Oracle 10g DBA Interview questions and answers

1. Is the following SQL statement syntactically correct? If not, please rewrite it correctly.

SELECT col1 FROM tableA WHERE NOT IN (SELECT col1 FROM tableB);

Ans: The SQL is incorrect; the NOT IN predicate is missing the column it applies to.

Correct SQL : SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);



2. What is a more efficient way to write this query, to achieve the same result set?


Ans: SELECT col1 FROM tableA MINUS SELECT col1 FROM tableB;



3. How would you determine that the new query is more efficient than the original query?

Ans: Run EXPLAIN PLAN on both queries and compare the resulting execution plans (cost, cardinality and access paths).
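
For example, a minimal sketch using EXPLAIN PLAN and the standard DBMS_XPLAN package shipped with 10g:

SQL> EXPLAIN PLAN FOR
     SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

SQL> EXPLAIN PLAN FOR
     SELECT col1 FROM tableA MINUS SELECT col1 FROM tableB;

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Compare the costs and access paths reported for the two statements; tracing both with AUTOTRACE or TKPROF gives the actual run-time statistics.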



4. How can we find the location of the database trace files from within the data dictionary?

Ans: The trace file is written on the database server machine in one of two locations:

1. If you are using a dedicated server connection, the trace file is generated in the directory specified by the USER_DUMP_DEST parameter.
2. If you are using a shared server connection, the trace file is generated in the directory specified by the BACKGROUND_DUMP_DEST parameter.

From SQL*Plus you can run:

SQL> SHOW PARAMETER DUMP_DEST

or query the data dictionary directly:

select name, value
from v$parameter
where name like '%dump_dest%';
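
As an illustration, here is a sketch that builds the full trace file name for the current dedicated-server session (it assumes the usual <instance>_ora_<spid>.trc naming convention):

select p.value || '/' || i.instance_name || '_ora_' || pr.spid || '.trc' as trace_file
from v$parameter p, v$instance i, v$process pr, v$session s
where p.name = 'user_dump_dest'
and s.audsid = sys_context('userenv', 'sessionid')
and pr.addr = s.paddr;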


5. What is the correct syntax for a UNIX endless WHILE loop?

Ans:

while :
do
  commands
done



6. Write the SQL statement that will return the name and size of the largest datafile in the database.


SQL> select name,bytes from v$datafile where bytes=(select max(bytes) from v$datafile);


7. What are the proper steps to change the Oracle database block size?

Ans: Cold backup all datafiles and back up the controlfile to trace, recreate the database with the new block size using the backup controlfile script, and restore/reload the data. It may be easier with RMAN. You cannot change the database block size on the fly.
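
Before starting, you would typically capture the current settings; a minimal sketch (the new value is later supplied through the DB_BLOCK_SIZE initialization parameter when the database is recreated):

SQL> show parameter db_block_size
SQL> alter database backup controlfile to trace;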




8. Using awk, write a script to print the 3rd field of every line.

Ans:

awk '{print $3}' filename


9. Under what conditions is a nested loop better than a merge join?

Ans:

The optimizer uses a nested loop join when joining tables that return a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, because that table is probed once for every row coming from the outer table.

The optimizer may not use a nested loop when:

1. The number of rows in both tables is quite high.
2. The inner query always returns the same set of records.
3. The access path of the inner table is independent of the data coming from the outer table.

A merge join is used to join two independent data sources. Merge joins perform better than nested loops when the volume of data in the tables is large, but in general not as well as hash joins.
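
To compare the two methods on your own data, you can force each one with an optimizer hint and examine the plans (EMP and DEPT below are just the familiar demo tables, used for illustration):

SQL> select /*+ USE_NL(e d) */ e.ename, d.dname
     from emp e, dept d
     where e.deptno = d.deptno;

SQL> select /*+ USE_MERGE(e d) */ e.ename, d.dname
     from emp e, dept d
     where e.deptno = d.deptno;

Run EXPLAIN PLAN on each and compare the reported costs.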



10. Which database views would you use to ascertain the number of commits a user's session has performed?

Ans: Join V$SESSTAT and V$STATNAME (supplying the SID of the session of interest):

select a.sid, b.name, a.value
from v$sesstat a, v$statname b
where b.statistic# = a.statistic#
and b.name = 'user commits'
and a.sid = :sid;
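
If you also want the username, a sketch that joins V$SESSION as well:

select s.sid, s.username, t.value as user_commits
from v$session s, v$sesstat t, v$statname n
where t.sid = s.sid
and t.statistic# = n.statistic#
and n.name = 'user commits';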



11. What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?

Ans: On the first line of an interpreter script, the "#!" names the program that should be used to interpret the contents of the file. For instance, if the first line contains "#!/bin/ksh", then the contents of the file are executed as a Korn shell script.




12. What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages (ONS, VIP, listener, instances, etc.)?

Ans:

$ crsctl check crs   (health of the CRS, CSS and EVM daemons)
$ crs_stat -t        (status of all the resources CRS manages: ONS, VIP, listeners, instances, etc.)

(ocrcheck only verifies the integrity of the Oracle Cluster Registry itself.)


13. Describe a scenario in which a vendor clusterware is required, in addition to the Oracle 10g Clusterware.

Ans: If you choose external redundancy for the OCR and voting disk, then the disk subsystem must provide that redundancy, either through RAID mirroring or through vendor clusterware. Otherwise, your system may be vulnerable because the OCR and voting disk are single points of failure.



14. How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?

Ans: Using the oifcfg command; the interface registered with type cluster_interconnect carries the interconnect traffic.

Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are optional depending on the command, are:

*nodename—Name of the Oracle Clusterware node as listed in the output from the olsnodes command
*if_name—Name by which the interface is configured in the system
*subnet—Subnet address of the interface
*if_type—Type of interface: public or cluster_interconnect

You can use OIFCFG to list the interface names and the subnets of all of the interfaces available on the local node by executing the iflist keyword, as shown in this example:

oifcfg iflist
hme0 139.185.141.0
qfe0 204.152.65.16


You can also retrieve specific OIFCFG information with a getif command using the following syntax:
oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]

To store a new interface, use the setif keyword. For example, to store the interface hme0, with the subnet 139.185.141.0, as a global interface (to be used as an interconnect for all of the RAC instances in your cluster), you would use the command:

oifcfg setif -global hme0/139.185.141.0:cluster_interconnect


For a cluster interconnect that exists between only two nodes, for example rac1 and rac2, you could create the cms0 interface with the following command, assuming 139.185.142.0 is the subnet address for the interconnect on rac1 and rac2:

oifcfg setif -global cms0/139.185.142.0:cluster_interconnect


Use the OIFCFG delif command to delete the stored configuration for global or node-specific interfaces. A specific node-specific or global interface can be deleted by supplying the interface name, with an optional subnet, on the command line. Without the -node or -global options, the delif keyword deletes either the given interface or all of the global and node-specific interfaces on all of the nodes in the cluster. For example, the following command deletes the global interface named qfe0 for the subnet 204.152.65.0:

oifcfg delif -global qfe0/204.152.65.0


On the other hand, the next command deletes all of the global interfaces stored with OIFCFG:

oifcfg delif -global



15. What is the purpose of the voting disk in Oracle 10g Clusterware?

Ans: The voting disk records node membership information. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, then you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster.


16. What is the purpose of the OCR in Oracle 10g Clusterware?

Ans: Oracle Cluster Registry (OCR) is a component in 10g RAC used to store the cluster configuration information. It is a shared disk component, typically located in a shared raw volume that must be accessible to all nodes in the cluster.

The CRSd daemon manages the configuration information in the OCR and maintains changes to the cluster configuration in the registry.



17. In Oracle Streams archived log downstream capture, which database view can be used to determine which archived logs are no longer needed by the capture process?

Ans: V$ARCHIVE_DEST_STATUS


Oracle Data Pump 10g expdp help

Help

The HELP=Y option displays the available parameters:
expdp help=y

Export: Release 10.1.0.2.0 - Production on Tuesday, 23 March, 2004 8:33

Copyright (c) 2003, Oracle. All rights reserved.


The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
CONTENT               Specifies data to unload where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dumpfiles and logfiles.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
TABLES                Identifies a list of tables to export - one schema only.
TABLESPACES           Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK  Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description
------------------------------------------------------------------------------
ADD_FILE              Add dumpfile to dumpfile set.
                      ADD_FILE=<dumpfile-name>
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
                      PARALLEL=<number of workers>.
START_JOB             Start/resume current job.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS=[interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.
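
If you need the job name in order to ATTACH to a running export, one simple sketch is to query the standard DBA_DATAPUMP_JOBS dictionary view:

select owner_name, job_name, operation, job_mode, state, attached_sessions
from dba_datapump_jobs;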

