Oracle 10g DBA Interview Questions and Answers

1. Is the following SQL statement syntactically correct? If not, please rewrite it correctly.

SELECT col1 FROM tableA WHERE NOT IN (SELECT col1 FROM tableB);

Ans: The SQL is incorrect; the WHERE clause is missing the column to compare before NOT IN.

Correct SQL: SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);



2. What is a more efficient way to write this query, to achieve the same result set?


Ans: SELECT col1 FROM tableA MINUS SELECT col1 FROM tableB;
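One caveat worth noting: the two forms are not always equivalent. MINUS removes duplicates, and the two constructs handle NULLs differently. A sketch, reusing the tables from question 1:

```sql
-- If tableB.col1 can contain NULL, NOT IN returns no rows at all,
-- because "col1 <> NULL" is never true:
INSERT INTO tableB (col1) VALUES (NULL);

SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);
-- no rows selected

SELECT col1 FROM tableA
MINUS
SELECT col1 FROM tableB;
-- still returns the tableA.col1 values that are absent from tableB
```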



3. How would you determine that the new query is more efficient than the original query?

Ans: Run EXPLAIN PLAN on both queries and compare the execution plans (access paths, estimated cost, and cardinality). Better still, execute both with statistics enabled (for example, SET AUTOTRACE ON in SQL*Plus) and compare the actual logical reads and elapsed time.
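For example, using the query from question 1 (a sketch; the table names come from that question, and 'q1' is an arbitrary statement id):

```sql
-- Generate and display the plan for one candidate query
EXPLAIN PLAN SET STATEMENT_ID = 'q1' FOR
  SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, 'q1'));

-- Repeat for the MINUS version and compare cost and access paths.
-- In SQL*Plus, SET AUTOTRACE TRACEONLY additionally reports runtime
-- statistics (consistent gets, physical reads) after each execution.
```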



4. How can we find the location of the database trace files from within the data dictionary?

Ans: Trace files on the database server are generally written to one of two locations:

1. For a dedicated server connection, the trace file is generated in the directory specified by the USER_DUMP_DEST parameter.
2. For a shared server connection, the trace file is generated in the directory specified by the BACKGROUND_DUMP_DEST parameter.

You can check these from SQL*Plus with:

SHOW PARAMETER DUMP_DEST

or

select name, value
from v$parameter
where name like '%dump_dest%';


5. What is the correct syntax for a UNIX endless WHILE loop?

Ans:

while :
do
    commands
done

(The : built-in always returns success; while true works equally well.)
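A runnable sketch of the same form; the counter and the break are added here only so the demonstration terminates, and are not part of the canonical answer:

```shell
# Endless-loop form 'while :'; ':' always succeeds, so the loop
# only ends when 'break' fires.
i=0
while :
do
    i=$((i + 1))
    if [ "$i" -ge 3 ]; then
        break
    fi
done
echo "looped $i times"
# prints: looped 3 times
```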



6. Write the SQL statement that will return the name and size of the largest datafile in the database.


SQL> select name,bytes from v$datafile where bytes=(select max(bytes) from v$datafile);


7. What are the proper steps to changing the Oracle database block size?

Ans: The database block size (DB_BLOCK_SIZE) cannot be changed on the fly; the database must be rebuilt. Take a full export of the data, cold backup all datafiles and back up the controlfile to trace as a safeguard, recreate the database with the new block size, and then import the data. Existing datafiles cannot simply be restored into a database with a different block size. From 9i onward, an alternative is to create tablespaces with a nondefault block size (with a matching DB_nK_CACHE_SIZE buffer cache configured) and move segments into them.




8. Using awk, write a script to print the 3rd field of every line.

Ans:

awk '{print $3}' filename
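A quick demonstration with sample input (the text is invented for illustration):

```shell
# awk splits each line on whitespace by default; $3 is the third field.
printf 'one two three\nfour five six\n' | awk '{print $3}'
# prints:
# three
# six
```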


9. Under what conditions is a nested loop better than a merge join?

Ans:

The optimizer uses a nested loop when joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as that table is probed once for every row coming from the outer table.

The optimizer tends not to use a nested loop when:

1. The number of rows in both tables is quite high.
2. The inner query always results in the same set of records.
3. The access path of the inner table is independent of the data coming from the outer table.

A merge join is used to join two independent data sources. Merge joins perform better than nested loops when the volume of data in the tables is large, but in general not as well as hash joins.



10. Which database views would you use to ascertain the number of commits a user's session has performed?

Ans: Join V$SESSTAT and V$STATNAME:

select a.sid, b.name, a.value
from v$sesstat a, v$statname b
where a.statistic# = b.statistic#
and b.name = 'user commits'
and a.sid = &sid;

(&sid is a substitution variable for the SID of the session of interest.)



11. What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?

Ans: On the first line of an interpreter script, the "#!" is followed by the path of the program that should be used to interpret the contents of the file. For instance, if the first line is "#!/bin/ksh", the contents of the file are executed as a Korn shell script. Without it, the script is run by whatever shell the caller happens to be using, which can change its behavior.
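A small demonstration, using /bin/sh rather than /bin/ksh since ksh is not installed everywhere; the path /tmp/demo.sh is arbitrary:

```shell
# Create a script whose first line names its interpreter, then run it.
cat > /tmp/demo.sh <<'EOF'
#!/bin/sh
echo "interpreted by /bin/sh"
EOF
chmod +x /tmp/demo.sh
/tmp/demo.sh
# prints: interpreted by /bin/sh
```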




12. What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages (ONS, VIP, listener, instances, etc.)?

Ans:

$ crsctl check crs
$ crs_stat -t

crsctl check crs verifies that the CRS, CSS, and EVM daemons are running; crs_stat -t shows the status of the resources managed by CRS (ONS, VIP, listeners, instances, etc.). ocrcheck, by contrast, checks the integrity of the OCR itself, not the status of the stack.


13.Describe a scenario in which a vendor clusterware is required, in addition to the Oracle 10g Clusterware.

Ans: If you choose external redundancy for the OCR and voting disk, then to provide that redundancy the disk subsystem must be configured with RAID mirroring or vendor clusterware. Otherwise, your system may be vulnerable because the OCR and voting disk become single points of failure.



14.How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?

Ans: Use the oifcfg command.

Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are optional depending on the command, are:

* nodename — name of the Oracle Clusterware node as listed in the output from the olsnodes command
* if_name — name by which the interface is configured in the system
* subnet — subnet address of the interface
* if_type — type of interface: public or cluster_interconnect

You can use OIFCFG to list the interface names and the subnets of all of the interfaces available on the local node by executing the iflist keyword, as shown in this example:

oifcfg iflist
hme0 139.185.141.0
qfe0 204.152.65.16


You can also retrieve specific OIFCFG information with a getif command using the following syntax:
oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]

To store a new interface, use the setif keyword. For example, to store the interface hme0, with the subnet 139.185.141.0, as a global interface (to be used as an interconnect for all of the RAC instances in your cluster), you would use the command:

oifcfg setif -global hme0/139.185.141.0:cluster_interconnect


For a cluster interconnect that exists between only two nodes, for example rac1 and rac2, you could create the cms0 interface with the following command, assuming 139.185.142.0 is the subnet address for the interconnect on rac1 and rac2:

oifcfg setif -global cms0/139.185.142.0:cluster_interconnect


Use the OIFCFG delif command to delete the stored configuration for global or node-specific interfaces. A specific node-specific or global interface can be deleted by supplying the interface name, with an optional subnet, on the command line. Without the -node or -global options, the delif keyword deletes either the given interface or all of the global and node-specific interfaces on all of the nodes in the cluster. For example, the following command deletes the global interface named qfe0 for the subnet 204.152.65.0:

oifcfg delif -global qfe0/204.152.65.0


On the other hand, the next command deletes all of the global interfaces stored with OIFCFG:

oifcfg delif -global



15.What is the Purpose of the voting disk in Oracle 10g Clusterware?

Ans: The voting disk records node membership information. Oracle Clusterware uses the voting disk to determine which instances are members of the cluster. The voting disk must reside on shared disk. For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster.
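On a running cluster, the configured voting disks can be listed from the Clusterware command line (a sketch; the output paths vary by installation):

```shell
# List the voting disks currently configured for CSS
crsctl query css votedisk
```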


16.What is the purpose of the OCR in Oracle 10g Clusterware?

Ans: The Oracle Cluster Registry (OCR) is the component of 10g RAC used to store the cluster configuration information. It is a shared component, typically located on a shared raw volume that must be accessible to all nodes in the cluster.

The CRSd daemon manages the configuration information in the OCR and maintains the changes to the cluster in the registry.



17. In Oracle Streams archived log downstream capture, which database view can be used to determine which archived logs are no longer needed by the capture process?

Ans: DBA_REGISTERED_ARCHIVED_LOG; its PURGEABLE column indicates archived logs that are no longer needed by any capture process.

Friday, September 5, 2008

What's New in Oracle Data Guard 11g?


Data Guard 11g has been made even more straightforward to implement and manage using either the Data Guard broker command line interface or Enterprise Manager Grid Control. Data Guard 11g adds new ways to detect corruptions that avoid data loss and extended downtime. Data Guard 11g includes flexible configuration options for fast, automatic failover in both Maximum Availability and Maximum Performance protection modes. Data Guard 11g physical standby databases can offload queries and reporting from the primary database by being open read-only while apply is active, effectively creating a new level of performance protection and return on investment for every Data Guard environment. The sections below highlight some of the key new features of Oracle Data Guard 11g.

Snapshot Standby

This is a new type of standby database that is created from a physical standby database. Once created, a snapshot standby can be opened read-write to process transactions that are independent of the primary database, for test or other purposes. A snapshot standby continues to receive and archive updates from the primary database; however, redo data received from the primary is not applied until the snapshot standby is converted back into a physical standby database, at which point all updates made while it was a snapshot standby are discarded. The ability of the snapshot standby to archive data received from the primary while open read-write means production data stays protected at all times.
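When the broker is not managing the configuration, the conversion is driven by SQL on the standby. A sketch; the intermediate restarts between states are omitted here (with the broker, DGMGRL's CONVERT DATABASE command performs the same steps):

```sql
-- On the physical standby: convert it for read-write testing
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- The database can now be opened read-write.

-- When testing is finished, discard the test changes and resume as a
-- physical standby; the redo accumulated from the primary is then applied:
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```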

Support for the Oracle Active Data Guard Option

The Active Data Guard Option enhances Quality of Service by offloading resource intensive workloads from your production database to one or more synchronized standby databases. Active Data Guard accomplishes this by enabling read-only access to a physical standby database for queries, real-time reporting, web-based access, etc., while continuously applying changes received from the production database. Active Data Guard also eliminates the overhead of performing backups on production systems by enabling RMAN block-change tracking and fast incremental backups using a physical standby database.

Failover enhancements

Data Guard 10g Release 2 introduced automatic failover using the new Fast-Start Failover feature with the Maximum Availability protection mode (SYNC). Data Guard 11g extends Fast-Start Failover to support Maximum Performance mode (ASYNC) by adding a user-configurable data loss threshold that guarantees an automatic failover will never result in data loss that exceeds the desired recovery point objective (RPO).

Users can also configure an automatic failover to occur immediately, without waiting for the Fast-Start Failover threshold time period to expire, based on designated health check conditions or any desired ORA-nnnnn error.
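Both behaviors are configured through the broker. A sketch in DGMGRL; the property and condition names are from the 11g broker, and the lag limit value of 30 seconds is purely illustrative:

```
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit = 30;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> ENABLE FAST_START FAILOVER CONDITION "Corrupted Controlfile";
```

Specific ORA-nnnnn errors can also be registered as failover conditions in the same way.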

A new DBMS_DG PL/SQL package can be used to enable applications to notify the Fast-Start Failover Observer process to initiate an automatic failover.
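For example (a sketch; the condition string is free text that the application supplies):

```sql
SET SERVEROUTPUT ON
DECLARE
  status BINARY_INTEGER;
BEGIN
  -- Ask the Fast-Start Failover observer to initiate failover
  status := DBMS_DG.INITIATE_FS_FAILOVER('Application detected fatal error');
  DBMS_OUTPUT.PUT_LINE('status = ' || status);
END;
/
```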

Other enhancements enable faster failovers across a range of Data Guard configurations, for both manual and automatic failovers and for both logical and physical standby databases.

Enhanced data protection

A physical standby can detect lost datafile writes caused by faulty storage hardware and firmware that lead to data corruption on either the primary or the standby database. Data Guard compares versions of blocks on the standby with those in the incoming redo stream; a version discrepancy implies a lost write. The user can then fail over to the standby database and restore data consistency.

Redo Transport enhancements

Data Guard 11g implements a new streaming design that significantly increases the throughput of redo transport in Maximum Performance protection mode (during asynchronous redo transport and when using ARCn processes to resolve gaps). The benefit of these enhancements will be particularly noticeable in high-latency WAN environments.

Enhancements to synchronous redo transport in Maximum Availability mode will further reduce the impact of network latency on primary database throughput, expanding the number of applications that will be able to tolerate synchronous zero data loss protection, and extending the distance between primary and standby databases that is practical for such implementations.

Standby databases also resynchronize faster following network or standby database outages when using the Oracle Advanced Compression Option. One of the capabilities of the Advanced Compression Option enables automatic network compression of archive logs shipped by Data Guard to resolve gaps on the standby database. This feature is particularly beneficial for bandwidth-constrained, high-latency network environments.

Apply performance enhancements

Parallel media recovery significantly enhances Redo Apply performance (physical standby) for all workload profiles.

SQL Apply enhancements for logical standby increase apply performance for inserts and updates to tables that are not partitioned and that do not contain LOB, LONG, or XML type columns. Also, SQL Apply now executes parallel DDL in parallel, rather than serially as was the practice in previous releases.

Transient Logical Standby

Users can convert a physical standby to a transient logical standby database to effect a rolling database upgrade, and then revert the standby to its original state as a physical standby database once the upgrade is complete, using the KEEP IDENTITY clause. This benefits physical standby users who wish to execute a rolling database upgrade without investing in the redundant storage otherwise needed to create a logical standby database.
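The key statement is issued on the physical standby at the start of the rolling upgrade (a sketch; the surrounding guaranteed-restore-point, upgrade, and switchover steps are omitted):

```sql
-- Convert the physical standby to a transient logical standby while
-- keeping its identity, so it can later revert to a physical standby
ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;
```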

Fine-grained monitoring of Data Guard configurations

Oracle Enterprise Manager has been enhanced to provide granular, up-to-date monitoring of Data Guard configurations, so that administrators may make an informed and expedient decision regarding managing this configuration.

RMAN enhancements for Data Guard

RMAN DUPLICATE can create a standby database directly over the network from the primary database to the standby system, without requiring interim storage at either location.
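With active duplication this looks roughly as follows (a sketch; the connect strings "primary" and "standby" are placeholders for your own net service names):

```
RMAN> CONNECT TARGET sys@primary
RMAN> CONNECT AUXILIARY sys@standby
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE;
```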

In addition to Real-time Query discussed above, the Active Data Guard Option adds support for RMAN block-change tracking on a physical standby database enabling fast incremental backup of a standby database. Tests have shown that incremental backups on a database with a moderate rate of change can complete up to 20x faster when using RMAN block-change tracking, compared to traditional incremental backups.

Enhanced security

SSL authentication can be used in lieu of a password file to authenticate redo transmission. Note: SSL authentication requires the use of PKI certificates, the Oracle Advanced Security Option, and Oracle Internet Directory.

Enhanced SQL Apply support

* XMLType data type (when stored as CLOB)
* Ability to execute DDL in parallel on a logical standby database
* Transparent Data Encryption (TDE)
* DBMS_FGA (Fine Grained Auditing)
* DBMS_RLS (Virtual Private Database)

SQL Apply Manageability

Scheduler jobs can be created on a standby database using the DBMS_SCHEDULER package and can be associated with an appropriate database role such that they run when intended (e.g. when the database is the primary, standby, or both).
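For example, to create a job that runs only while the database is in the logical standby role (a sketch; the job name and action are placeholders):

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'standby_only_job',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN NULL; END;',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
  -- Restrict the job to run only when this database is a logical standby
  DBMS_SCHEDULER.SET_ATTRIBUTE('standby_only_job',
                               'database_role', 'LOGICAL STANDBY');
END;
/
```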

Switchover using SQL Apply with Oracle RAC databases no longer requires the prior shutdown of all but the first instance in each Oracle RAC cluster.

Data Guard SQL Apply parameters may also be set dynamically, without requiring SQL Apply to be restarted. Using the DBMS_LOGSTDBY.APPLY_SET procedure, you can dynamically set apply parameters, thus improving the manageability, uptime, and automation of a logical standby configuration.
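For example, on the logical standby (MAX_SERVERS is a documented SQL Apply parameter; the value 32 is illustrative):

```sql
-- Raise the number of SQL Apply server processes without restarting apply
EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_SERVERS', 32);
```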

Data Guard Broker

* Enables automatic database failover for configurations using either Maximum Availability or Maximum Performance mode.
* Enables configurable events to trigger immediate automatic failover to a target standby database.
* Improved support for redo transport options, enabling an administrator to specify a connect description for Redo Transport Services.
* Elimination of database downtime when changing the protection mode to and from Maximum Availability and Maximum Performance.
* Support for single instance databases configured for HA using Oracle Clusterware and cold failover clusters.

Enterprise Manager Grid Control 11g

* Creation of standby databases from existing RMAN backups
* Creation of an Oracle RAC standby database from an Oracle RAC primary
* Automated standby clones for reporting, development, and test
* Automatic propagation of Enterprise Manager jobs and metric thresholds to the new primary database upon switchover or failover
* Fault-tolerant observer for Fast-Start Failover
* Enterprise Manager Data Recovery Advisor will utilize available standby databases when making recommendations for Intelligent Data Repair (IDR)
