Oracle 10 DBA Interview questions and answers

1. Is the following SQL statement syntactically correct? If not, please rewrite it correctly.

SELECT col1 FROM tableA WHERE NOT IN (SELECT col1 FROM tableB);

Ans. SQL is incorrect.

Correct SQL : SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);



2. What is a more efficient way to write this query, to achieve the same result set?


Ans: SELECT col1 FROM tableA MINUS SELECT col1 FROM tableB;

(Note: the two forms are not strictly equivalent when tableB.col1 contains NULLs; NOT IN returns no rows in that case, while MINUS still returns the difference.)
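As a quick illustration of the set-difference semantics outside the database, the same A-minus-B result can be sketched with standard UNIX tools; the file names and values here are invented for the demo:

```shell
# tableA.col1 and tableB.col1 modeled as sorted one-column text files
printf '1\n2\n3\n' > /tmp/tableA_col1.txt
printf '2\n3\n4\n' > /tmp/tableB_col1.txt

# comm -23 prints lines unique to the first file: the set difference A - B
comm -23 /tmp/tableA_col1.txt /tmp/tableB_col1.txt   # prints: 1
```

comm requires sorted input, which is why the sample files are written in order.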



3. How would you determine that the new query is more efficient than the original query?

Ans: Run EXPLAIN PLAN on both queries and compare the execution plans and their costs; for actual runtime figures, compare execution statistics (logical reads, elapsed time) using AUTOTRACE or SQL trace with TKPROF.
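For example, a minimal sketch of generating and displaying a plan for the rewritten query (table and column names taken from question 1; DBMS_XPLAN is available in Oracle 9iR2 and later):

```sql
EXPLAIN PLAN FOR
SELECT col1 FROM tableA MINUS SELECT col1 FROM tableB;

-- display the plan just generated from the plan table
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Repeating the same two steps for the original NOT IN form lets you compare the two plans side by side.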



4. How can we find the location of the database trace files from within the data dictionary?

Ans: The trace file on the database server machine is generally located in one of two places:

1. If you are using a dedicated server connection, the trace file is generated in the directory specified by the USER_DUMP_EST parameter.
2. If you are using a shared server connection, the trace file is generated in the directory specified by the BACKGROUND_DUMP_DEST parameter.

You can check these from SQL*Plus:

SQL> SHOW PARAMETER DUMP_DEST

or

select name, value
from v$parameter
where name like '%dump_dest%';


5. What is the correct syntax for a UNIX endless WHILE loop?

Ans:

while :
do
    commands
done
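To show that the form actually loops, here is a bounded variant; the counter and exit test are added only so the demo terminates:

```shell
i=0
while :                       # ':' is a shell built-in that always succeeds
do
    i=$((i + 1))
    [ "$i" -ge 3 ] && break   # exit condition added only for the demo
done
echo "looped $i times"        # prints: looped 3 times
```

`while true` is an equivalent spelling; `:` is simply the traditional built-in no-op.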



6. Write the SQL statement that will return the name and size of the largest datafile in the database.


SQL> select name,bytes from v$datafile where bytes=(select max(bytes) from v$datafile);


7. What are the proper steps to changing the Oracle database block size?

Ans: Take a cold backup of all data files and back up the controlfile to trace; recreate the database with the new block size using the backup control file, and restore the data. It may be easier with RMAN. You cannot change the database block size on the fly.




8. Using awk, write a script to print the 3rd field of every line.

Ans:

awk '{print $3}' filename
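A quick check of the one-liner against sample input; the file name and contents are invented for the demo:

```shell
# sample input: three whitespace-separated fields per line
printf 'a b c\nd e f\n' > /tmp/awk_demo.txt

# print the third field of every line
awk '{print $3}' /tmp/awk_demo.txt   # prints: c, then f
```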


9. Under what conditions is a nested loop better than a merge join?

Ans:

The optimizer uses a nested loop when joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, because that table is probed for every new value from the outer table.

The optimizer may not use a nested loop when:

1. The number of rows in both tables is quite high
2. The inner query always returns the same set of records
3. The access path of the inner table is independent of the data coming from the outer table

A merge join is used to join two independent data sources. Merge joins perform better than nested loops when the volume of data in the tables is big, but in general not as well as hash joins.



10. Which database views would you use to ascertain the number of commits a user's session has performed?

Ans: Join V$SESSTAT and V$STATNAME:

select a.sid, b.name, a.value
from v$sesstat a, v$statname b
where a.statistic# = b.statistic#
and b.name = 'user commits'
and a.sid = <target SID>;



11. What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?

Ans: The "#!" on the first line of an interpreter script names the program that should be used to interpret the contents of the file. For instance, if the first line contains "#!/bin/ksh", then the contents of the file are executed as a Korn shell script. It should be there so that the script runs under the intended interpreter regardless of the user's login shell.
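A minimal sketch, written against /bin/sh rather than /bin/ksh since ksh may not be installed on every system; the file path is invented for the demo:

```shell
# create a tiny script whose first line names its interpreter
cat > /tmp/shebang_demo.sh <<'EOF'
#!/bin/sh
echo "interpreted by the program named on line 1"
EOF

chmod +x /tmp/shebang_demo.sh
/tmp/shebang_demo.sh    # the kernel hands the file to /bin/sh to execute
```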




12. What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages (ONS, VIP, listener, instances, etc.)?

Ans:

$ crsctl check crs
$ crs_stat -t

crsctl check crs verifies that the Clusterware daemons are healthy, and crs_stat -t lists the status of every resource CRS manages. (ocrcheck, by contrast, only checks the integrity of the OCR.)


13. Describe a scenario in which a vendor clusterware is required, in addition to the Oracle 10g Clusterware.

If you choose external redundancy for the OCR and voting disk, then to enable redundancy, the disk subsystem must be configurable for RAID mirroring/vendor clusterware. Otherwise, your system may be vulnerable because the OCR and voting disk are single points of failure.



14. How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?

Ans: Use the oifcfg command.

Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are optional depending on the command, are:

* nodename: name of the Oracle Clusterware node as listed in the output from the olsnodes command
* if_name: name by which the interface is configured in the system
* subnet: subnet address of the interface
* if_type: type of interface, either public or cluster_interconnect

You can use OIFCFG to list the interface names and the subnets of all of the interfaces available on the local node by executing the iflist keyword, as shown in this example:

oifcfg iflist
hme0 139.185.141.0
qfe0 204.152.65.16


You can also retrieve specific OIFCFG information with a getif command using the following syntax:
oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]

To store a new interface, use the setif keyword. For example, to store the interface hme0, with the subnet 139.185.141.0, as a global interface (to be used as an interconnect for all of the RAC instances in your cluster), you would use the command:

oifcfg setif -global hme0/139.185.141.0:cluster_interconnect


For a cluster interconnect that exists between only two nodes, for example rac1 and rac2, you could create the cms0 interface with the following command, assuming 139.185.142.0 is the subnet address for the interconnect on rac1 and rac2:

oifcfg setif -global cms0/139.185.142.0:cluster_interconnect


Use the OIFCFG delif command to delete the stored configuration for global or node-specific interfaces. A specific node-specific or global interface can be deleted by supplying the interface name, with an optional subnet, on the command line. Without the -node or -global options, the delif keyword deletes either the given interface or all of the global and node-specific interfaces on all of the nodes in the cluster. For example, the following command deletes the global interface named qfe0 for the subnet 204.152.65.0:

oifcfg delif -global qfe0/204.152.65.0


On the other hand, the next command deletes all of the global interfaces stored with OIFCFG:

oifcfg delif -global



15. What is the purpose of the voting disk in Oracle 10g Clusterware?

The voting disk records node membership information. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, then you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster.


16. What is the purpose of the OCR in Oracle 10g Clusterware?

Ans: The Oracle Cluster Registry (OCR) is a component in 10g RAC used to store the cluster configuration information. It is a shared-disk component, typically located in a shared raw volume that must be accessible to all nodes in the cluster.

The CRSd daemon manages the configuration information in the OCR and maintains the changes to the cluster in the registry.



17. In Oracle Streams archived log downstream capture, which database view can be used to determine which archived logs are no longer needed by the capture process?

Ans: V$ARCHIVE_DEST_STATUS

Wednesday, September 3, 2008

What is ADDM in Oracle 10g? How does it help diagnose performance problems?

New diagnostics engine in Oracle 10g helps detect and diagnose performance problems

In my last column, I focused on one of Oracle Database 10g's quick wins for DBAs—using the new SQL Tuning Advisor to quickly tune poorly performing SQL statements—and touched only briefly on the new built-in diagnostic engine, Automatic Database Diagnostic Monitor (ADDM), which helps you easily identify problematic SQL statements (see "Advice and Consent" in the March/April 2004 issue for more information about using the SQL Tuning Advisor and the DBMS_SQLTUNE package). Identifying high-load SQL statements is just one of ADDM's many capabilities. A self-diagnostic engine built into the Oracle Database 10g kernel, ADDM automatically detects and diagnoses common performance problems, including:

* Hardware issues related to excessive I/O
* CPU bottlenecks
* Connection management issues
* Excessive parsing
* Concurrency issues, such as contention for locks
* PGA, buffer-cache, and log-buffer-sizing issues
* Issues specific to Oracle Real Application Clusters (RAC) deployments, such as global cache hot blocks and objects and interconnect latency issues

Let's continue exploring the new performance tuning capabilities of Oracle Database 10g by taking a look at ADDM.

Automatic, Effective Problem Diagnosis

ADDM automatically analyzes the performance of the database at regular intervals (once an hour by default) and identifies any performance issues. ADDM's diagnostic engine is modeled as a classification system that quickly and effectively sifts through the statistical data in Automatic Workload Repository (AWR)—a newly introduced built-in repository in Oracle Database 10g containing performance statistics and workload information—and evaluates problem areas, based on their impact on overall database performance. ADDM's processing logic encompasses best practices and the accumulated expertise of tuning professionals throughout the industry. Ultimately ADDM presents a set of "findings" that not only identifies the cause of performance problems but can also inform administrators that certain database subcomponents, such as I/O, are functioning properly and do not require any further investigation.

Automation is just one of several unique benefits of ADDM. Another important benefit is ADDM's ability to identify the root cause of problems. Just as physicians achieve better results for their patients by treating causes rather than symptoms, DBAs can achieve better database performance by finding the root cause of performance issues before making changes to the system. However, finding a root cause can be difficult and time-consuming, given that symptoms can sometimes mask the real problem.

ADDM can distinguish between the cause and symptoms of a problem. It's able to do so, in part, by using the extensive new data (events, statistics) generated by the database at runtime. Oracle database kernel code has always been instrumented to provide raw performance data, but in this release, the instrumentation is more comprehensive.

For example, wait events are more granular, with numerous locks and latches now separated into distinct wait events. Furthermore, wait events are now grouped into classes—Application, Administration, Commit, Concurrency, Network, User I/O, System I/O, and Configuration, for example—to facilitate processing through ADDM.

In addition to the new wait-event model, Oracle Database 10g also includes an array of new statistics that provide performance data on everything from operating system performance—hard disk I/O statistics, CPU utilization, and network statistics, for example—to cumulative statistics at the system and session levels. One of the most important new statistics is database time, or DB time—the amount of time the database is actually working on database calls.

It's About Time

ADDM uses DB time to measure throughput; it's the total time spent by foreground processes—such as a read I/O—waiting for a database resource, running on the CPU, and waiting for a free CPU. The overarching goal of ADDM is to reduce the value of DB time across the system, thereby improving overall throughput.

Operational information about active sessions—those using the CPU—as well as sessions waiting for the CPU, is sampled every second and held in a scrolling buffer in the server memory—Active Session History (ASH), an important new feature of Oracle Database 10g. ASH (V$ACTIVE_SESSION_HISTORY) maintains historical data; in previous Oracle releases, DBAs had information about currently active sessions only, not historical data, so figuring out the cause of performance problems after the fact wasn't easy.

Data from ASH, along with other important database statistics, persists on the disk in the form of "snapshots." Snapshots are taken every hour and stored for seven days, by default, but DBAs can change settings for both snapshot frequency and storage duration. AWR provides the raw information for ADDM analysis, which begins automatically as soon as a new AWR snapshot is taken.
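Those defaults can be changed through the DBMS_WORKLOAD_REPOSITORY package; the interval and retention values below are illustrative only:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,          -- take a snapshot every 30 minutes
    retention => 14 * 1440);  -- keep 14 days of snapshots (value is in minutes)
END;
/
```

The current settings can be checked afterwards in the DBA_HIST_WR_CONTROL view.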

Getting Started with ADDM

Because ADDM runs automatically after each new AWR snapshot is taken, no manual steps are required to generate its findings. But you can run ADDM on demand by creating a new snapshot manually, by using either Oracle Enterprise Manager (OEM) or the command-line interface. The following shows creation of a snapshot from the command line:

SQL> exec dbms_workload_repository.create_snapshot();

PL/SQL procedure successfully completed.

After the new snapshot is created, its information (snap_id, begin_interval_time, and the like) is populated to the DBA_HIST_SNAPSHOT dictionary view. A few seconds after you take the snapshot, ADDM brings new findings to the surface, based on analysis of that snapshot.

When you access the database instance in the new OEM console, you see at a glance how the database is operating in general, on a dashboard-style home page. You can quickly verify that the server has adequate CPU and memory resources and get a general overview of the system from this one page. The Active Sessions pie chart shows the current distribution across the server in three key areas: Using CPU, Waiting I/O, and Waiting Other. You can drill down on any of these items to see a line graph of activity over the last 24 hours (and change this to values such as 31 days, 7 days, or just a few minutes).

The results of the latest ADDM run are displayed in the Performance Analysis section of the home page, as shown in Figure 2, which provides a summary view of the top ADDM findings: the impact of the diagnosed problem as a percentage of overall database performance, a description of the finding, and a recommendations summary. The text of each finding is a hyperlink to more-detailed information on that particular finding. ADDM recommendations can include running one of the new advisors—SQL Tuning Advisor, SQL Access Advisor, Space Management Advisor, and so on—when applicable. For example, a solution for a particular finding can involve using SQL Tuning Advisor to tune a particular SQL statement or adding more resources to the machine. Clicking on the recommendation link displays the detailed finding view, which includes the full text of any recommendations.

Conclusion

Oracle Database 10g provides DBAs with comprehensive, quantifiable diagnostic information that can be used to proactively monitor and prevent problems as well as respond to them when they arise. There's an enormous amount of new statistical information available to help keep the database running optimally. More important, the automatic performance-diagnostic capabilities of AWR and ADDM will not only help you resolve critical performance problems quickly but will also provide you with hard-and-fast, quantifiable data for making the business case for new hardware, additional staff, application development initiatives, or better training for employees. And Oracle plans to continually update ADDM's problem classification tree and diagnostic rules to keep pace with new features and technologies introduced in the database and underlying operating platform.
