Oracle 10g DBA Interview Questions and Answers

1. Is the following SQL statement syntactically correct? If not, please rewrite it correctly.

SELECT col1 FROM tableA WHERE NOT IN (SELECT col1 FROM tableB);

Ans: The SQL is incorrect.

Correct SQL : SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);



2. What is a more efficient way to write this query, to achieve the same result set?


Ans: SELECT col1 FROM tableA MINUS SELECT col1 FROM tableB;



3. How would you determine that the new query is more efficient than the original query?

Ans: Run EXPLAIN PLAN on both queries and compare the execution plans (and, ideally, the actual run-time statistics).
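
For example, a quick way to compare the two (a sketch; tableA and tableB are the sample tables from questions 1 and 2, and DBMS_XPLAN is available in 10g):

EXPLAIN PLAN FOR
SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Repeat the same steps for the MINUS version and compare the cost and row estimates of the two plans. SQL*Plus AUTOTRACE, or a SQL trace formatted with TKPROF, can be used to compare actual execution statistics as well.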



4. How can we find the location of the database trace files from within the data dictionary?

Ans: Generally, a trace file on the database server machine is written to one of two locations:

1. If you are using a dedicated server connection, the trace file is generated in the directory specified by the USER_DUMP_DEST parameter.
2. If you are using a shared server connection, the trace file is generated in the directory specified by the BACKGROUND_DUMP_DEST parameter.

You can check these locations from SQL*Plus with:

SQL> SHOW PARAMETER DUMP_DEST

or

select name, value
from v$parameter
where name like '%dump_dest%';


5. What is the correct syntax for a UNIX endless WHILE loop?

Ans:

while :
do
  commands
done
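
As an illustrative (hypothetical) use of the construct, a loop that reports free space every five minutes until interrupted:

while :
do
  df -k /u01      # /u01 is just an example mount point
  sleep 300
done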



6. Write the SQL statement that will return the name and size of the largest datafile in the database.


SQL> select name,bytes from v$datafile where bytes=(select max(bytes) from v$datafile);


7. What are the proper steps to changing the Oracle database block size?

Ans: Take a cold backup of all datafiles and back up the controlfile to trace, recreate the database
with the new block size using the backup controlfile, and restore the data. It may be easier
with RMAN. You cannot change the database block size on the fly.
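
A sketch of the preparatory commands for the approach described above (checking the current block size and taking the controlfile trace backup):

SQL> SHOW PARAMETER db_block_size
SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

The new block size itself is set with the DB_BLOCK_SIZE initialization parameter in the pfile/spfile used when the database is recreated.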




8. Using awk, write a script to print the 3rd field of every line.

Ans:

awk '{print $3}' filename

(If no filename is given, awk reads from standard input.)


9. Under what conditions is a nested loop better than a merge join?

Ans:

The optimizer uses a nested loop when joining tables that return a small number of rows and there is an efficient driving condition.
It is important to have an index on the join column of the inner table, because that table is probed once for every row returned from the outer table.

The optimizer may not use a nested loop when:

1. The number of rows in both tables is quite high.
2. The inner query always returns the same set of records.
3. The access path of the inner table is independent of the data coming from the outer table.

A merge join is used to join two independent data sources. Merge joins perform better than nested loops when the volume of data in the tables is large, but in general not as well as hash joins.
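
If you want to verify the difference yourself, the join method can be forced with hints and the plans compared (a sketch; emp and dept are assumed sample tables with an index on emp.deptno):

SELECT /*+ USE_NL(e) */ d.dname, e.ename
FROM dept d, emp e
WHERE e.deptno = d.deptno;

SELECT /*+ USE_MERGE(e) */ d.dname, e.ename
FROM dept d, emp e
WHERE e.deptno = d.deptno;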



10. Which database views would you use to ascertain the number of commits a user's session has performed?

Ans: Join V$SESSTAT and V$STATNAME:

select a.sid, a.value
from V$SESSTAT a, V$STATNAME b
where b.STATISTIC# = a.STATISTIC#
and b.NAME = 'user commits'
and a.sid = :sid;



11. What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?

Ans: On the first line of an interpreter script, the text following "#!" is the name of the program that should be used to interpret the contents of the file.
For instance, if the first line contains "#!/bin/ksh", then the contents of the file are executed as a Korn shell script.




12. What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages (ONS, VIP, listener, instances, etc.)?

Ans:

$ crs_stat -t

crsctl check crs can also be used to verify that the CRS daemons themselves are healthy. (ocrcheck, by contrast, only checks the integrity of the Oracle Cluster Registry.)


13. Describe a scenario in which a vendor clusterware is required, in addition to the Oracle 10g Clusterware.

If you choose external redundancy for the OCR and voting disk, then to enable redundancy, the disk subsystem must be configurable for RAID mirroring/vendor clusterware. Otherwise, your system may be vulnerable because the OCR and voting disk are single points of failure.



14. How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?

Ans: Using the oifcfg command.

Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are optional depending on the command, are:

*nodename—Name of the Oracle Clusterware node as listed in the output from the olsnodes command
*if_name—Name by which the interface is configured in the system
*subnet—Subnet address of the interface
*if_type—Type of interface: public or cluster_interconnect

You can use OIFCFG to list the interface names and the subnets of all of the interfaces available on the local node by executing the iflist keyword, as shown in this example:

oifcfg iflist
hme0 139.185.141.0
qfe0 204.152.65.16


You can also retrieve specific OIFCFG information with a getif command using the following syntax:
oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]

To store a new interface, use the setif keyword. For example, to store the interface hme0, with the subnet 139.185.141.0, as a global interface (to be used as an interconnect for all of the RAC instances in your cluster), you would use the command:

oifcfg setif -global hme0/139.185.141.0:cluster_interconnect


For a cluster interconnect that exists between only two nodes, for example rac1 and rac2, you could create the cms0 interface with the following command, assuming 139.185.142.0 is the subnet address for the interconnect on rac1 and rac2:

oifcfg setif -global cms0/139.185.142.0:cluster_interconnect


Use the OIFCFG delif command to delete the stored configuration for global or node-specific interfaces. A specific node-specific or global interface can be deleted by supplying the interface name, with an optional subnet, on the command line. Without the -node or -global options, the delif keyword deletes either the given interface or all of the global and node-specific interfaces on all of the nodes in the cluster. For example, the following command deletes the global interface named qfe0 for the subnet 204.152.65.0:

oifcfg delif -global qfe0/204.152.65.0


On the other hand, the next command deletes all of the global interfaces stored with OIFCFG:

oifcfg delif -global



15. What is the purpose of the voting disk in Oracle 10g Clusterware?

Ans: The voting disk records node membership information. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, then you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster.
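
To see where the voting disks are currently located, the following Clusterware command can be used:

$ crsctl query css votedisk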


16. What is the purpose of the OCR in Oracle 10g Clusterware?

Ans: The Oracle Cluster Registry (OCR) is the component of 10g RAC that stores the cluster configuration information. It is a shared-disk component, typically located on a shared raw volume that must be accessible to all nodes in the cluster.

The CRSD daemon manages the configuration information in the OCR and maintains the changes to the cluster in the registry.
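
The location and integrity of the OCR can be verified from the command line, for example:

$ ocrcheck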



17. In Oracle Streams archived log downstream capture, which database view can be used to determine which archived logs are no longer needed by the capture process?

Ans: V$ARCHIVE_DEST_STATUS

Wednesday, September 24, 2008

Setting an Oracle database to archivelog mode from noarchivelog mode

Let's start by checking the current archive mode.

SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

LOG_MODE
------------
NOARCHIVELOG


So we're in NOARCHIVELOG mode and we need to change that. We could use an ALTER statement, but that won't be permanent, so let's just update the pfile directly (located under C:\oracle\product\10.2.0\admin\orcl):

###########################################
# Enable Archive Log
###########################################
log_archive_dest_1='location=C:\oracle\product\10.2.0\oradata\archive'
log_archive_start=TRUE

Note: you must first create the archive directory and grant access privileges on it to the oracle user.

Note that we're not actually required to specify the location of the log destination, but if you don't, the archived logs will end up in strange places (in my test they went to $ORACLE_HOME/dbs, making a mess). You can specify as many as 10 different archive log destinations by using the parameters log_archive_dest_1 through log_archive_dest_10. Remember, if you run out of space in your archive log destination, the database will shut down!

Now we can startup the database in mount mode and put it in archivelog mode.


SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 612368384 bytes
Fixed Size 1292036 bytes
Variable Size 230689020 bytes
Database Buffers 373293056 bytes
Redo Buffers 7094272 bytes
Database mounted.
Database opened.
SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

NOARCHIVELOG

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 612368384 bytes
Fixed Size 1292036 bytes
Variable Size 234883324 bytes
Database Buffers 369098752 bytes
Redo Buffers 7094272 bytes
Database mounted.
SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

ARCHIVELOG

SQL>

Sunday, September 21, 2008

Oracle Version Information - How to Check Your Current Release Number?

To identify the release of Oracle Database that is currently installed and to see the release levels of other database components you are using, query the data dictionary view PRODUCT_COMPONENT_VERSION. A sample query follows. (You can also query the V$VERSION view to see component-level information.) Other product release levels may increment independent of the database server.

COL PRODUCT FORMAT A35
COL VERSION FORMAT A15
COL STATUS FORMAT A15
SELECT * FROM PRODUCT_COMPONENT_VERSION;

PRODUCT VERSION STATUS
---------------------------------------- ----------- -----------
NLSRTL 10.2.0.1.0 Production
Oracle Database 10g Enterprise Edition 10.2.0.1.0 Prod
PL/SQL 10.2.0.1.0 Production
...

Database and Instance Shutdown

Close a Database

When you close a database, Oracle writes all database data and recovery data in the SGA to the datafiles and redo log files, respectively. Next, Oracle closes all online datafiles and redo log files. (Any offline datafiles of any offline tablespaces have been closed already. If you subsequently reopen the database, any tablespace that was offline and its datafiles remain offline and closed, respectively.) At this point, the database is closed and inaccessible for normal operations. The control files remain open after a database is closed but still mounted.

Close the Database by Terminating the Instance

In rare emergency situations, you can terminate the instance of an open database to close and completely shut down the database instantaneously. This process is fast, because the operation of writing all data in the buffers of the SGA to the datafiles and redo log files is skipped. The subsequent reopening of the database requires recovery, which Oracle performs automatically.

Note:

If a system or power failure occurs while the database is open, then the instance is, in effect, terminated, and recovery is performed when the database is reopened.

Unmount a Database

After the database is closed, Oracle unmounts the database to disassociate it from the instance. At this point, the instance remains in the memory of your computer.

After a database is unmounted, Oracle closes the control files of the database.

Shut Down an Instance

The final step in database shutdown is shutting down the instance. When you shut down an instance, the SGA is removed from memory and the background processes are terminated.

Abnormal Instance Shutdown

In unusual circumstances, shutdown of an instance might not occur cleanly; all memory structures might not be removed from memory or one of the background processes might not be terminated. When remnants of a previous instance exist, a subsequent instance startup most likely will fail. In such situations, the database administrator can force the new instance to start up by first removing the remnants of the previous instance and then starting a new instance, or by issuing a SHUTDOWN ABORT statement in SQL*Plus or using Enterprise Manager.

Database and Instance Shutdown

The three steps to shutting down a database and its associated instance are:

  1. Close the database.

  2. Unmount the database.

  3. Shut down the instance.

A database administrator can perform these steps using Enterprise Manager. Oracle automatically performs all three steps whenever an instance is shut down.
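
In SQL*Plus the whole sequence is driven by a single command; for example, a clean shutdown and a staged restart look like this:

SQL> SHUTDOWN IMMEDIATE      -- closes the database, unmounts it, and shuts down the instance
SQL> STARTUP MOUNT           -- starts the instance and mounts the database
SQL> ALTER DATABASE OPEN;    -- opens the database for normal use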

What Happens When You Open a Database

Opening a mounted database makes it available for normal database operations. Any valid user can connect to an open database and access its information. Usually, a database administrator opens the database to make it available for general use.

When you open the database, Oracle opens the online datafiles and redo log files. If a tablespace was offline when the database was previously shut down, the tablespace and its corresponding datafiles will still be offline when you reopen the database.

If any of the datafiles or redo log files are not present when you attempt to open the database, then Oracle returns an error. You must perform recovery on a backup of any damaged or missing files before you can open the database.

How to Identify Your Oracle Database Software Release

Because Oracle Database continues to evolve and can require maintenance, Oracle periodically produces new releases. Not all customers initially subscribe to a new release or require specific maintenance for their existing release. As a result, multiple releases of the product exist simultaneously.

As many as five numbers may be required to fully identify a release. The significance of these numbers is discussed in the sections that follow.

Release Number Format

To understand the release nomenclature used by Oracle, examine the following example of an Oracle Database server labeled "Release 10.1.0.1.0".

Figure 1-1 Example of an Oracle Database Release Number


What is block change tracking in RMAN Oracle 10g

Oracle Database 10g RMAN implements incremental backups in a manner that disposes of the old objection: "RMAN scans all the data blocks to identify candidates for backup. This process puts so much stress on the system that doing incrementals becomes impractical." It uses a file, analogous to journals in filesystems, to track the blocks that have changed since the last backup. RMAN reads this file to determine which blocks are to be backed up.

You can enable this tracking mechanism by issuing the following command:

SQL> alter database enable block change tracking using file '/rman_bkups/change.log';

This command creates a binary file called /rman_bkups/change.log for tracking purposes. Conversely, you can disable tracking with

SQL> alter database disable block change tracking;

To see whether change tracking is currently enabled, you can query:

SQL> select filename, status from v$block_change_tracking;

What is Incremental Merge in RMAN 10g?

Let's say you have the following backup schedule:

Sunday - Level 0 (full), with tag level_0
Monday - Level 1 (incremental) with tag level_1_mon
Tuesday - Level 1 (incremental) with tag level_1_tue

and so on. If the database fails on Saturday, prior to 10g you would have had to restore the tag level_0 and then apply all six incrementals. It would have taken a long time, which is another reason many DBAs shun incremental backups.

Oracle Database 10g RMAN radically changes that equation. Now, your incremental backup command looks like this:

RMAN> backup incremental level 1 for recover of copy with tag level_0 database;

Here we have instructed RMAN to make an incremental level_1 backup and merge that with the full backup copy with the tag level_0. After this command, level_0 becomes a full backup of that day.

So, on Tuesday, the backup with tag level_0, when merged with incremental level_1 backup, becomes identical to the full Tuesday backup. Similarly, the incremental taken on Saturday, when applied to the backup on disk, will be equivalent to a full level_0 Saturday backup. If the database fails on Saturday, you just need to restore the level_0 backup plus a few archive logs to bring the database into a consistent state; there is no need to apply additional incrementals. This approach cuts down recovery time dramatically, speeds backup, and eliminates the need to make a full database backup again.
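
Note that the merge itself is typically performed by a separate RECOVER COPY command run together with the incremental backup, so a daily script would contain a pair of commands along these lines (a sketch using the level_0 tag from the example above):

RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG level_0 DATABASE;
RMAN> RECOVER COPY OF DATABASE WITH TAG level_0;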

Saturday, September 20, 2008

Moving from DBMS_JOB to DBMS_SCHEDULER

Removing a Job from the Job Queue

The following example removes a job using DBMS_JOB, where 14144 is the number of the job being run:

BEGIN
DBMS_JOB.REMOVE(14144);
COMMIT;
END;
/


Using DBMS_SCHEDULER, you would issue the following statement instead:

BEGIN
DBMS_SCHEDULER.DROP_JOB('myjob1');
END;
/

Moving from DBMS_JOB to DBMS_SCHEDULER

Altering a Job

An example of altering a job using DBMS_JOB is the following:

BEGIN
DBMS_JOB.WHAT(31, 'INSERT INTO employees VALUES (7935, ''TOM'', ''DOGAN'',
''tom.dogan@xyzcorp.com'', NULL, SYSDATE,''AD_PRES'', NULL,
NULL, NULL, NULL);');
COMMIT;
END;
/


This changes the action for job 31 to insert a different value. An equivalent statement using DBMS_SCHEDULER is the following:

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(
name => 'JOB1',
attribute => 'job_action',
value => 'INSERT INTO employees VALUES (7935, ''TOM'', ''DOGAN'',
''tom.dogan@xyzcorp.com'', NULL, SYSDATE, ''AD_PRES'', NULL,
NULL, NULL, NULL);');
END;
/

Moving from DBMS_JOB to DBMS_SCHEDULER

Creating a Job

An example of creating a job using DBMS_JOB is the following:

VARIABLE jobno NUMBER;
BEGIN
DBMS_JOB.SUBMIT(:jobno, 'INSERT INTO employees VALUES (7935, ''SALLY'',
''DOGAN'', ''sally.dogan@xyzcorp.com'', NULL, SYSDATE, ''AD_PRES'', NULL,
NULL, NULL, NULL);', SYSDATE, 'SYSDATE+1');
COMMIT;
END;
/


An equivalent statement using DBMS_SCHEDULER is the following:

BEGIN
DBMS_SCHEDULER.CREATE_JOB(
job_name => 'job1',
job_type => 'PLSQL_BLOCK',
job_action => 'INSERT INTO employees VALUES (7935, ''SALLY'',
''DOGAN'', ''sally.dogan@xyzcorp.com'', NULL, SYSDATE,''AD_PRES'', NULL,
NULL, NULL, NULL);',
start_date => SYSDATE,
repeat_interval => 'FREQ = DAILY; INTERVAL = 1');
END;
/

Moving from DBMS_JOB to DBMS_SCHEDULER

Moving from DBMS_JOB to DBMS_SCHEDULER

Oracle Database provides advanced scheduling capabilities through the database Scheduler, a collection of functions and procedures in the DBMS_SCHEDULER package. The Scheduler offers far more functionality than the DBMS_JOB package, which was the previous Oracle Database job scheduler. In the next few postings we will discuss how you can take statements created with DBMS_JOB and rewrite them using DBMS_SCHEDULER.

Oracle Database 10g Scheduler - Associating Jobs with Programs

Associating Jobs with Programs

In the above case, you created a job independently of any program. Now let's create one that refers to an operating system utility or program, a schedule to specify how many times something should run, and then join the two to create a job.

First you need to make the database aware that your script is a program to be used in a job. To create this program, you must have the CREATE JOB privilege.

begin
dbms_scheduler.create_program
(
program_name => 'MOVE_ARCS',
program_type => 'EXECUTABLE',
program_action => '/home/arup/dbtools/move_arcs.sh',
enabled => TRUE,
comments => 'Moving Archived Logs to Staging Directory'
);
end;
/

Here you have created a named program unit, specified it as an executable, and noted what the program unit is called.

Next, you will create a named schedule to be run every 30 minutes called EVERY_30_MINS. You would do that with:

begin
dbms_scheduler.create_schedule
(
schedule_name => 'EVERY_30_MINS',
repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
comments => 'Every 30-mins'
);
end;
/

Now that the program and schedule are created, you will associate the program to the schedule to create a job.

begin
dbms_scheduler.create_job
(
job_name => 'ARC_MOVE',
program_name => 'MOVE_ARCS',
schedule_name => 'EVERY_30_MINS',
comments => 'Move Archived Logs to a Different Directory',
enabled => TRUE
);
end;
/

This will create a job to be run every 30 minutes that executes the shell script move_arcs.sh. It will be handled by the Scheduler feature inside the database—no need for cron or the AT utility.

Oracle Database 10g Scheduler - Creating Jobs Without Programs

Creating Jobs Without Programs

Perhaps the concept can be best introduced through examples. Suppose you have created a shell script to move archived log files to a different filesystem as follows:

/home/arup/dbtools/move_arcs.sh

We can specify the OS executable directly without creating it as a program first.

begin
dbms_scheduler.create_job
(
job_name => 'ARC_MOVE_2',
schedule_name => 'EVERY_30_MINS',
job_type => 'EXECUTABLE',
job_action => '/home/arup/dbtools/move_arcs.sh',
enabled => true,
comments => 'Move Archived Logs to a Different Directory'
);
end;
/

Similarly, you can create a job without a named schedule.

begin
dbms_scheduler.create_job
(
job_name => 'ARC_MOVE_3',
job_type => 'EXECUTABLE',
job_action => '/home/arup/dbtools/move_arcs.sh',
repeat_interval => 'FREQ=MINUTELY; INTERVAL=30',
enabled => true,
comments => 'Move Archived Logs to a Different Directory'
);
end;
/

One advantage of Scheduler over dbms_job is pretty clear from our initial example: the ability to call OS utilities and programs, not just PL/SQL program units. This ability makes it the most comprehensive job management tool for managing Oracle Database and related jobs. However, you may have noted another, equally important advantage: the ability to define intervals in natural language. Note that in the above example we wanted our schedule to run every 30 minutes; hence the parameter REPEAT_INTERVAL is defined with a simple, English-like expression (not a PL/SQL one):

'FREQ=MINUTELY; INTERVAL=30'

A more complex example may help convey this advantage even better. Suppose your production applications become most active at 7:00AM and 3:00PM. To collect system statistics, you want to run Statspack from Monday to Friday at 7:00AM and 3:00PM only. If you use DBMS_JOB.SUBMIT to create a job, the NEXT_DATE parameter will look something like this:

DECODE
(
SIGN
(
15 - TO_CHAR(SYSDATE,'HH24')
),
1,
TRUNC(SYSDATE)+15/24,
TRUNC
(
SYSDATE +
DECODE
(
TO_CHAR(SYSDATE,'D'), 6, 3, 1
)
)
+7/24
)

Is that code easy to understand? Not really.

Now let's see the equivalent job in DBMS_SCHEDULER. The parameter REPEAT_INTERVAL will be as simple as:

'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15'

Furthermore, this parameter value can accept a variety of intervals, some of them very powerful. Here are some more examples:

* Last Sunday of every month:

FREQ=MONTHLY; BYDAY=-1SUN

* Every third Friday of the month:

FREQ=MONTHLY; BYDAY=3FRI

* Every second Friday from the end of the month, not from the beginning:

FREQ=MONTHLY; BYDAY=-2FRI

The minus signs before the numbers indicate counting from the end, instead of the beginning.

What if you wanted to verify if the interval settings are correct? Wouldn't it be nice to see the various dates constructed from the calendar string? Well, you can get a preview of the calculation of next dates using the EVALUATE_CALENDAR_STRING procedure. Using the first example—running Statspack every day from Monday through Friday at 7:00AM and 3:00PM—you can check the accuracy of your interval string as follows:

set serveroutput on size 999999

declare
L_start_date TIMESTAMP;
l_next_date TIMESTAMP;
l_return_date TIMESTAMP;
begin
l_start_date := trunc(SYSTIMESTAMP);
l_return_date := l_start_date;
for ctr in 1..10 loop
dbms_scheduler.evaluate_calendar_string(
'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=7,15',
l_start_date, l_return_date, l_next_date
);
dbms_output.put_line('Next Run on: ' ||
to_char(l_next_date,'mm/dd/yyyy hh24:mi:ss')
);
l_return_date := l_next_date;
end loop;
end;
/

The output is:

Next Run on: 03/22/2004 07:00:00
Next Run on: 03/22/2004 15:00:00
Next Run on: 03/23/2004 07:00:00
Next Run on: 03/23/2004 15:00:00
Next Run on: 03/24/2004 07:00:00
Next Run on: 03/24/2004 15:00:00
Next Run on: 03/25/2004 07:00:00
Next Run on: 03/25/2004 15:00:00
Next Run on: 03/26/2004 07:00:00
Next Run on: 03/26/2004 15:00:00

This confirms that your settings are correct.

Oracle Database 10g Scheduler

The Oracle Database 10g Scheduler feature offers the best of all worlds: a job scheduler utility right inside the database that is sufficiently powerful to handle all types of jobs, not just PL/SQL code segments, and that can help you create jobs either with or without associated programs and/or schedules. Best of all, it comes with the database at no additional cost. In this installment, we'll take a look at how it works.

DBMS_JOB Package

Some of you may use the dbms_job package extensively to submit database jobs to be run in the background, control the time or interval of a run, report failures, and much more.

The problem with the package is that it can handle only PL/SQL code segments—just anonymous blocks and stored program units. It cannot handle anything outside the database that is in an operating system command file or executable. To do so, you would have to resort to using an operating system scheduling utility such as cron in Unix or the AT command in Windows. Or, you could use a third-party tool, one that may even extend this functionality by providing a graphical user interface.

Even so, dbms_job has a distinct advantage over these alternatives: it is active only when the database is up and running. If the database is down, the jobs don't run. A tool outside the database must manually check if the database is up—and that can be difficult. Another advantage is that dbms_job is internal to the database; hence you can access it via a database access utility such as SQL*Plus.

Data Guard Database Synchronization Options

Data Guard Database Synchronization Options

Data Guard can be configured to run with varying synchronization modes indicating the potential for data loss:

* No-Data-Loss mode :
This simply means that the log transport services will not acknowledge modifications to the primary database until they are available to the standby database. This doesn't mean that the modifications have been applied to the standby database, merely that the log information is available to the log apply services should failover occur. This mode is implemented using standby redo logs on the standby server.

* No-Data-Divergence mode :
This is an extension of the no-data-loss mode whereby modifications to the primary database are prevented if connectivity between the primary and at least one standby database is unavailable.

* Minimal-Data-Loss mode : When the performance requirements of the primary database are the top priority this mode provides the optimum balance of data protection and performance.

Data Guard Role Management

Role Management

Using Data Guard, the role of a database can be switched from a primary role to a standby role and vice versa, ensuring no data loss in the process, and minimizing downtime.

There are two kinds of role transitions - a switchover and a failover.

A switchover is a role reversal between the primary database and one of its standby databases. This is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role and the standby database transitions to the primary role. The transition occurs without having to re-create either database.

A failover is a transition of a standby database to the primary role following a sudden outage of the primary database. The failed primary can be reinstated as a standby database for the new primary using Oracle Flashback Database. This can eliminate the need to recreate the failed primary from a backup, dramatically reducing the time and effort required to return the configuration to a protected state.
Administrators have the option of executing failovers manually, or Data Guard can be configured to automatically detect primary database failures and execute a failover to the standby.
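
For a physical standby, the switchover described above is typically driven by SQL statements like the following (a simplified sketch; the complete procedure also involves verifying SWITCHOVER_STATUS in V$DATABASE and restarting the databases in their new roles):

-- On the current primary:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;

-- On the standby that is to become the new primary:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;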

Data Guard Broker

Data Guard Broker

The Oracle Data Guard Broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. All management operations can be performed either through Oracle Enterprise Manager, which uses the Broker, or through the Broker's specialized command-line interface (DGMGRL). Data Guard Broker 11g also enables automatic database failover for Data Guard configurations using either Maximum Availability or Maximum Performance modes.

Data Guard Broker

* Enables automatic database failover for configurations using either Maximum Availability or Maximum Performance mode.
* Enables configurable events to trigger immediate automatic failover to a target standby database.
* Improved support for redo transport options, enabling an administrator to specify a connect description for Redo Transport Services.
* Elimination of database downtime when changing the protection mode to and from Maximum Availability and Maximum Performance.
* Support for single instance databases configured for HA using Oracle Clusterware and cold failover clusters.

Data Guard Protection Modes

Data Guard Protection Modes

In some situations, a business cannot afford to lose data at any cost. In other situations, some applications require maximum database performance and can tolerate a potential loss of data. Data Guard provides three distinct modes of data protection to satisfy these varied requirements:

*Maximum Protection—This mode offers the highest level of data protection. Data is synchronously transmitted to the standby database from the primary database and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode. If the last standby database configured in this mode becomes unavailable, processing stops on the primary database. This mode ensures no-data-loss, even in the event of multiple failures.

*Maximum Availability—
This mode is similar to the maximum protection mode, including zero data loss. However, if a standby database becomes unavailable (for example, because of network connectivity problems), processing continues on the primary database. When the fault is corrected, the standby database is automatically resynchronized with the primary database. This mode achieves no-data-loss in the event of a single failure (e.g. network failure, primary site failure . . .)

*Maximum Performance—This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode. In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database. The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database. If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.
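
The protection mode is selected on the primary database with an ALTER DATABASE statement; for example (a sketch, issued while the database is mounted):

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;

The same syntax accepts MAXIMIZE PROTECTION and MAXIMIZE PERFORMANCE for the other two modes.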

Redo Apply and SQL Apply

Redo Apply and SQL Apply

A standby database is initially created from a backup copy of the primary database. Once created, Data Guard automatically maintains the standby database as a synchronized copy of the primary database by transmitting primary database redo data to the standby system and then applying the redo data to the standby database.

Data Guard provides two methods to apply this redo data to the standby database and keep it synchronized with the primary, and these methods correspond to the two types of standby databases supported by Data Guard.

* Redo Apply, used for physical standby databases
* SQL Apply, used for logical standby databases

A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, is the same. The Redo Apply technology applies redo data on the physical standby database using standard Oracle media recovery techniques. In addition to traditional Data Guard functionality, the Active Data Guard Option for Oracle Database 11g enables a physical standby database to be open read-only while it applies updates received from the primary database. This makes physical standby databases useful for offloading the primary database from the overhead of processing read-only queries and reports. This also makes it simple to validate that the standby database is synchronized with the primary database at all times.

A logical standby database contains the same logical information as the production database, although the physical organization and structure of the data can be different. The SQL apply technology keeps the logical standby database synchronized with the primary database by transforming redo data received from the primary database into SQL statements and then executing the SQL statements on the standby database. This makes it possible for the logical standby database to be open read-write and accessed for queries and reporting purposes at the same time the SQL is being applied to it.
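
The corresponding apply services are started on the standby with statements along these lines (a sketch):

-- Physical standby (Redo Apply):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Logical standby (SQL Apply):
ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;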

Data Guard Benefits

Data Guard Benefits

1. Disaster recovery and high availability

Data Guard provides an efficient and comprehensive disaster recovery and high availability solution. Automatic failover and easy-to-manage switchover capabilities allow quick role reversals between primary and standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

2. Complete data protection

A standby database also provides an effective safeguard against data corruptions and user errors. Storage level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated at the time it is received at the standby database and further when applied to the standby database.

3. Efficient utilization of system resources

A physical standby database can be used for backups and read-only reporting, thereby reducing the primary database workload and saving valuable CPU and I/O cycles. A physical standby database can also be easily converted back and forth between being a physical standby database and an open read/write database, without compromising data protection. A logical standby database enables read-write access to a synchronized standby database, and/or adding local tables to the standby database that can also be updated, and/or creating additional indexes to optimize read performance.

4. Flexibility in data protection to balance availability against performance requirements

Oracle Data Guard offers the maximum protection, maximum availability, and maximum performance modes to help enterprises balance data availability against system performance requirements.

5. Protection from communication failures

If network connectivity is lost between the primary and one or more standby databases, redo data cannot be sent from the primary to those standby databases. Once connectivity is re-established, the missing redo data is automatically detected by Data Guard and the necessary archive logs are automatically transmitted to the standby databases. The standby databases are resynchronized with the primary database, with no manual intervention by the administrator.

6. Centralized and simple management

Data Guard Broker automates the management and monitoring tasks across the multiple databases in a Data Guard configuration. Administrators may use either Oracle Enterprise Manager or the Broker’s own specialized command-line interface (DGMGRL) to take advantage of this integrated management framework.

7. Integrated with Oracle database

Data Guard is available as an integrated feature of the Oracle Database (Enterprise Edition) at no extra cost.

Creating a Nested Materialized View: Example

Creating a Nested Materialized View: Example

The following example uses the materialized view from the preceding example as a master table to create a materialized view tailored for a particular sales representative in the sample oe schema:

CREATE MATERIALIZED VIEW my_warranty_orders
AS SELECT w.order_id, w.line_item_id, o.order_date
FROM warranty_orders w, orders o
WHERE w.order_id = o.order_id
AND o.sales_rep_id = 165;

Creating a Fast Refreshable Materialized View: Example

Creating a Fast Refreshable Materialized View: Example

The following statement creates a fast-refreshable materialized view that selects columns from the order_items table in the sample oe schema, using the UNION set operator to restrict the rows returned from the product_information and inventories tables using WHERE conditions. The materialized view logs for order_items and product_information were created in the "Examples " section of CREATE MATERIALIZED VIEW LOG. This example also requires a materialized view log on oe.inventories.

CREATE MATERIALIZED VIEW LOG ON inventories
WITH (quantity_on_hand);

CREATE MATERIALIZED VIEW warranty_orders REFRESH FAST AS
SELECT order_id, line_item_id, product_id FROM order_items o
WHERE EXISTS
(SELECT * FROM inventories i WHERE o.product_id = i.product_id
AND i.quantity_on_hand IS NOT NULL)
UNION
SELECT order_id, line_item_id, product_id FROM order_items
WHERE quantity > 5;


This materialized view requires that materialized view logs be defined on order_items (with product_id as a join column) and on inventories (with quantity_on_hand as a filter column).

Automatic Refresh Times for Materialized Views: Example

Automatic Refresh Times for Materialized Views: Example

The following statement creates the complex materialized view all_customers that queries the employee tables on the remote and local databases:

CREATE MATERIALIZED VIEW all_customers
PCTFREE 5 PCTUSED 60
TABLESPACE example
STORAGE (INITIAL 50K NEXT 50K)
USING INDEX STORAGE (INITIAL 25K NEXT 25K)
REFRESH START WITH ROUND(SYSDATE + 1) + 11/24
NEXT NEXT_DAY(TRUNC(SYSDATE), 'MONDAY') + 15/24
AS SELECT * FROM sh.customers@remote
UNION
SELECT * FROM sh.customers@local;



Oracle Database automatically refreshes this materialized view tomorrow at 11:00 a.m. and subsequently every Monday at 3:00 p.m. The default refresh method is FORCE. The defining query contains a UNION operator, which is not supported for fast refresh, so the database will automatically perform a complete refresh.

The preceding statement also establishes storage characteristics for both the materialized view and the index that the database uses to maintain it:

*The first STORAGE clause establishes the sizes of the first and second extents of the materialized view as 50 kilobytes each.
*The second STORAGE clause, appearing with the USING INDEX clause, establishes the sizes of the first and second extents of the index as 25 kilobytes each.

Periodic Refresh of Materialized Views: Example

Periodic Refresh of Materialized Views: Example

The following statement creates the primary key materialized view emp_data and populates it with data from the sample table hr.employees:

CREATE MATERIALIZED VIEW LOG ON employees
WITH PRIMARY KEY
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW emp_data
PCTFREE 5 PCTUSED 60
TABLESPACE example
STORAGE (INITIAL 50K NEXT 50K)
REFRESH FAST NEXT sysdate + 7
AS SELECT * FROM employees;


The statement does not include a START WITH parameter, so Oracle Database determines the first automatic refresh time by evaluating the NEXT value using the current SYSDATE. A materialized view log was created for the employee table, so Oracle Database performs a fast refresh of the materialized view every 7 days, beginning 7 days after the materialized view is created.

Because the materialized view conforms to the conditions for fast refresh, the database will perform a fast refresh. The preceding statement also establishes storage characteristics that the database uses to maintain the materialized view.

Creating Rowid Materialized Views: Example

Creating Rowid Materialized Views: Example

The following statement creates a rowid materialized view on the sample table oe.orders:

CREATE MATERIALIZED VIEW order_data REFRESH WITH ROWID
AS SELECT * FROM orders;

Creating Primary Key Materialized Views: Example

Creating Primary Key Materialized Views: Example

The following statement creates the primary key materialized view catalog on the sample table oe.product_information:

CREATE MATERIALIZED VIEW catalog
REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 1/4096
WITH PRIMARY KEY
AS SELECT * FROM product_information;

Creating Subquery Materialized Views: Example

Creating Subquery Materialized Views: Example

The following statement creates a subquery materialized view based on the customers and countries tables in the sh schema at the remote database:

CREATE MATERIALIZED VIEW foreign_customers FOR UPDATE
AS SELECT * FROM sh.customers@remote cu
WHERE EXISTS
(SELECT * FROM sh.countries@remote co
WHERE co.country_id = cu.country_id);

Creating Prebuilt Materialized Views: Example

Creating Prebuilt Materialized Views: Example

The following statement creates a materialized aggregate view for the preexisting summary table, sales_sum_table:

CREATE TABLE sales_sum_table
(month VARCHAR2(8), state VARCHAR2(40), sales NUMBER(10,2));

CREATE MATERIALIZED VIEW sales_sum_table
ON PREBUILT TABLE WITH REDUCED PRECISION
ENABLE QUERY REWRITE
AS SELECT t.calendar_month_desc AS month,
c.cust_state_province AS state,
SUM(s.amount_sold) AS sales
FROM times t, customers c, sales s
WHERE s.time_id = t.time_id AND s.cust_id = c.cust_id
GROUP BY t.calendar_month_desc, c.cust_state_province;


In this example, the materialized view has the same name and also has the same number of columns with the same datatypes as the prebuilt table. The WITH REDUCED PRECISION clause allows for differences between the precision of the materialized view columns and the precision of the values returned by the subquery.

Creating Materialized Join Views: Example

Creating Materialized Join Views: Example

The following statement creates and populates the materialized aggregate view sales_by_month_by_state using tables in the sample sh schema. The materialized view will be populated with data as soon as the statement executes successfully. By default, subsequent refreshes will be accomplished by reexecuting the defining query of the materialized view:

CREATE MATERIALIZED VIEW sales_by_month_by_state
TABLESPACE example
PARALLEL 4
BUILD IMMEDIATE
REFRESH COMPLETE
ENABLE QUERY REWRITE
AS SELECT t.calendar_month_desc, c.cust_state_province,
SUM(s.amount_sold) AS sum_sales
FROM times t, sales s, customers c
WHERE s.time_id = t.time_id AND s.cust_id = c.cust_id
GROUP BY t.calendar_month_desc, c.cust_state_province;

Creating Materialized Aggregate Views: Example

The following statement creates and populates a materialized aggregate view on the sample sh.sales table and specifies the default refresh method, mode, and time. It uses the materialized view log created in "Creating a Materialized View Log: Examples", as well as the two additional logs shown here:

CREATE MATERIALIZED VIEW LOG ON times
WITH ROWID, SEQUENCE (time_id, calendar_year)
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON products
WITH ROWID, SEQUENCE (prod_id)
INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sales_mv
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
AS SELECT t.calendar_year, p.prod_id,
SUM(s.amount_sold) AS sum_sales
FROM times t, products p, sales s
WHERE t.time_id = s.time_id AND p.prod_id = s.prod_id
GROUP BY t.calendar_year, p.prod_id;

Privileges to create a refresh-on-commit materialized view

To create a refresh-on-commit materialized view (ON COMMIT REFRESH clause), in addition to the preceding privileges, you must have the ON COMMIT REFRESH object privilege on any master tables that you do not own or you must have the ON COMMIT REFRESH system privilege.

To create the materialized view with query rewrite enabled, in addition to the preceding privileges:

*If the schema owner does not own the master tables, then the schema owner must have the GLOBAL QUERY REWRITE privilege or the QUERY REWRITE object privilege on each table outside the schema.
*If you are defining the materialized view on a prebuilt container (ON PREBUILT TABLE clause), then you must have the SELECT privilege WITH GRANT OPTION on the container table.

The user whose schema contains the materialized view must have sufficient quota in the target tablespace to store the master table and index of the materialized view or must have the UNLIMITED TABLESPACE system privilege.

When you create a materialized view, Oracle Database creates one internal table and at least one index, and may create one view, all in the schema of the materialized view. Oracle Database uses these objects to maintain the materialized view data. You must have the privileges necessary to create these objects.

Privileges to create a materialized view in another user's schema

To create a materialized view in another user's schema:

*You must have the CREATE ANY MATERIALIZED VIEW system privilege.
*The owner of the materialized view must have the CREATE TABLE system privilege. The owner must also have access to any master tables of the materialized view that the schema owner does not own (for example, if the master tables are on a remote database) and to any materialized view logs defined on those master tables, either through a SELECT object privilege on each of the tables or through the SELECT ANY TABLE system privilege.

What privileges are required to create a materialized view?

The privileges required to create a materialized view should be granted directly rather than through a role.
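
For example, a DBA might grant the required privileges directly to a user as follows (a sketch; scott and the sh.sales master table are just sample names):

GRANT CREATE MATERIALIZED VIEW TO scott;
GRANT CREATE TABLE TO scott;
GRANT SELECT ON sh.sales TO scott;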

To create a materialized view in your own schema:

*You must have been granted the CREATE MATERIALIZED VIEW system privilege and either the CREATE TABLE or CREATE ANY TABLE system privilege.
*You must also have access to any master tables of the materialized view that you do not own, either through a SELECT object privilege on each of the tables or through the SELECT ANY TABLE system privilege.

What is materialized view?

A materialized view is a database object that contains the results of a query. The FROM clause of the query can name tables, views, and other materialized views. Collectively these objects are called master tables (a replication term) or detail tables (a data warehousing term).

The keyword SNAPSHOT is supported in place of MATERIALIZED VIEW for backward compatibility.

What is Optimizer Plan Stability?

Plan stability prevents certain database environment changes from affecting the performance characteristics of applications. Such changes include changes in optimizer statistics, changes to the optimizer mode settings, and changes to parameters affecting the sizes of memory structures, such as SORT_AREA_SIZE and BITMAP_MERGE_AREA_SIZE. Plan stability is most useful when you cannot risk any performance changes in an application.

Plan stability preserves execution plans in stored outlines. An outline is implemented as a set of optimizer hints that are associated with the SQL statement. If the use of the outline is enabled for the statement, Oracle automatically considers the stored hints and tries to generate an execution plan in accordance with those hints.

Oracle can create a public or private stored outline for one or all SQL statements. The optimizer then generates equivalent execution plans from the outlines when you enable the use of stored outlines. You can group outlines into categories and control which category of outlines Oracle uses to simplify outline administration and deployment.

The plans Oracle maintains in stored outlines remain consistent despite changes to a system's configuration or statistics. Using stored outlines also stabilizes the generated execution plan if the optimizer changes in subsequent Oracle releases.
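
As a brief illustration, an outline can be created for a statement and then enabled for a category of outlines (a sketch; sales_outln, app_cat, and the query are hypothetical names, and creating outlines requires the CREATE ANY OUTLINE privilege):

CREATE OR REPLACE OUTLINE sales_outln
FOR CATEGORY app_cat
ON SELECT * FROM sales WHERE region_id = 10;

ALTER SESSION SET use_stored_outlines = app_cat;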

Stored outlines provide several benefits:

SQL parsing and execution time is reduced because Oracle will quickly grab and execute the stored outline for the SQL.

Tuning of SQL statements can easily be made permanent without locating the source code.

SQL from third-party products (e.g., SAP, Peoplesoft) can be tuned without touching the SQL source code.

Optimizer plan stability enables you to maintain the same execution plans for the same SQL statements, regardless of changes to the database. Without optimizer plan stability, any of the following database changes may alter the execution plan for every SQL statement on your system:

1. Re-analyzing tables
2. Adding or deleting data from tables
3. Modifying a table's columns, constraints, or indexes
4. Changing the system configuration
5. Upgrading to a new version of the optimizer

DBMS_ADVISOR Package

The DBMS_ADVISOR package is new in Oracle 10g. It provides the PL/SQL application programming interface (API) that manages all the advisors, and it is used to create and execute advisor tasks, including SQL Access Advisor tasks. Many procedures are available in the package, as given below:

Procedure name Description
CREATE_TASK Creates a new advisor task in the repository.
SET_TASK_PARAMETER Modifies a user parameter within a task (a task can only be modified if it is in the initial state).
EXECUTE_TASK Executes the specified task (analysis task in the asked case).
GET_TASK_REPORT Creates and returns an XML report for the specified task.
DELETE_TASK Deletes the specified advisor task.
CANCEL_TASK Cancels a currently executing task.
INTERRUPT_TASK Stops the currently executing task, with normal exit.
RESUME_TASK Resumes an interrupted task.
RESET_TASK Resets a task to its original state.
UPDATE_TASK_ATTRIBUTES Changes various attributes of a task.
MARK_RECOMMENDATION Accepts, rejects, or ignores a recommendation.
GET_TASK_SCRIPT Creates a SQL*Plus-compatible SQL script of all the recommendations that are accepted from a specified task.
CREATE_FILE Creates an external file from a PL/SQL CLOB variable, used for creating scripts and reports.
QUICK_TUNE Analyzes and generates recommendations for a single SQL statement.


Oracle 10g introduced several intelligent utilities that produce performance tuning advice and suggestions based on historical data stored in the AWR. These utilities are called advisors. The dbms_advisor package provides a PL/SQL API to access the following advisors:

The Automatic Database Diagnostic Monitor (ADDM) provides Oracle DBAs with performance data gathered by the AWR. The ADDM also identifies root causes of performance bottlenecks and generates a report containing recommendations or findings on how to improve database performance.

The SQLAccess Advisor provides analysis and recommendations about indexes and materialized views which may improve system performance.
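
As a small example of the API, the QUICK_TUNE procedure can be used to get SQL Access Advisor recommendations for a single statement (a sketch; the task name and the query are hypothetical):

BEGIN
  DBMS_ADVISOR.QUICK_TUNE(
    advisor_name => DBMS_ADVISOR.SQLACCESS_ADVISOR,
    task_name    => 'quick_tune_task',
    attr1        => 'SELECT cust_id, SUM(amount_sold) FROM sh.sales GROUP BY cust_id');
END;
/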

Summary of DBMS_ADVISOR Subprograms


Subprogram
Description
Used in

ADD_SQLWKLD_REF Procedure
Adds a workload reference to an Advisor task
SQL Access Advisor only


ADD_SQLWKLD_STATEMENT Procedure
Adds a single statement to a workload
SQL Access Advisor only

ADD_STS_REF Procedure
Establishes a link between the current SQL Access Advisor task and a SQL Tuning Set
SQL Access Advisor only

CANCEL_TASK Procedure
Cancels a currently executing task operation
All Advisors

COPY_SQLWKLD_TO_STS Procedure
Copies the contents of a SQL workload object to a SQL Tuning Set
SQL Access Advisor

CREATE_FILE Procedure
Creates an external file from a PL/SQL CLOB variable, which is useful for creating scripts and reports
All Advisors

CREATE_OBJECT Procedure
Creates a new task object
All Advisors

CREATE_SQLWKLD Procedure
Creates a new workload object
SQL Access Advisor only

CREATE_TASK Procedures
Creates a new Advisor task in the repository
All Advisors

DELETE_SQLWKLD Procedure
Deletes an entire workload object
SQL Access Advisor only

DELETE_SQLWKLD_REF Procedure
Deletes a workload reference from an Advisor task
SQL Access Advisor only

DELETE_SQLWKLD_STATEMENT Procedures
Deletes one or more statements from a workload
SQL Access Advisor only

DELETE_STS_REF Procedure
Removes a link between the current SQL Access Advisor task and a SQL Tuning Set object
SQL Access Advisor only

DELETE_TASK Procedure
Deletes the specified task from the repository
All Advisors

EXECUTE_TASK Procedure
Executes the specified task
All Advisors

GET_REC_ATTRIBUTES Procedure
Retrieves specific recommendation attributes from a task
All Advisors

GET_TASK_REPORT Function
Creates and returns a report for the specified task
All Advisors

GET_TASK_SCRIPT Function
Creates and returns an executable SQL script of the Advisor task's recommendations in a buffer
All Advisors

IMPLEMENT_TASK Procedure
Implements the recommendations for a task
All Advisors

IMPORT_SQLWKLD_SCHEMA Procedure
Imports data into a workload from schema evidence
SQL Access Advisor only

IMPORT_SQLWKLD_SQLCACHE Procedure
Imports data into a workload from the current SQL cache
SQL Access Advisor only

IMPORT_SQLWKLD_STS Procedure
Imports data into a workload from a SQL Tuning Set into a SQL workload data object
SQL Access Advisor only

IMPORT_SQLWKLD_SUMADV Procedure
Imports data into a workload from a 9i Summary Advisor workload
SQL Access Advisor only

IMPORT_SQLWKLD_USER Procedure
Imports data into a workload from a user-supplied table
SQL Access Advisor only

INTERRUPT_TASK Procedure
Stops a currently executing task, ending its operations as it would at a normal exit, so that the recommendations are visible
All Advisors

MARK_RECOMMENDATION Procedure
Sets the annotation_status for a particular recommendation
All Advisors

QUICK_TUNE Procedure
Performs an analysis on a single SQL statement
All Advisors

RESET_TASK Procedure
Resets a task to its initial state
All Advisors

SET_DEFAULT_SQLWKLD_PARAMETER Procedures
Modifies the default value of a workload parameter
SQL Access Advisor only

SET_DEFAULT_TASK_PARAMETER Procedures
Modifies a default task parameter
All Advisors

SET_SQLWKLD_PARAMETER Procedures
Sets the value of a workload parameter
SQL Access Advisor only

SET_TASK_PARAMETER Procedure
Sets the specified task parameter value
All Advisors

TUNE_MVIEW Procedure
Shows how to decompose a materialized view into two or more materialized views or to restate the materialized view in a way that is more advantageous for fast refresh and query rewrite
SQL Access Advisor only

UPDATE_OBJECT Procedure
Updates a task object
All Advisors

UPDATE_REC_ATTRIBUTES Procedure
Updates an existing recommendation for the specified task
All Advisors

UPDATE_SQLWKLD_ATTRIBUTES Procedure
Updates a workload object
SQL Access Advisor only

UPDATE_SQLWKLD_STATEMENT Procedure
Updates one or more SQL statements in a workload
SQL Access Advisor only

UPDATE_TASK_ATTRIBUTES Procedure
Updates a task's attributes
All Advisors

What is a Bigfile tablespace? Why do we create it? What is the advantage of using it?

Bigfile Tablespaces
Bigfile tablespaces are tablespaces with a single large datafile. In contrast, normal (smallfile) tablespaces can have several datafiles, but each is limited in size. The benefits of bigfile tablespaces are listed after the examples below.

Creating Bigfile Tablespaces

-- Valid statement.
CREATE BIGFILE TABLESPACE mybigtbs
DATAFILE '/u01/oracle/MYSID/mybigtbs01.dbf' SIZE 100G;

-- Valid statement.
CREATE BIGFILE TABLESPACE mybigtbs
DATAFILE '/u01/oracle/MYSID/mybigtbs01.dbf' SIZE 100G
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

-- invalid statement.
CREATE BIGFILE TABLESPACE mybigtbs
DATAFILE '/u01/oracle/MYSID/mybigtbs01.dbf' SIZE 100G
EXTENT MANAGEMENT DICTIONARY
SEGMENT SPACE MANAGEMENT MANUAL;


* An Oracle database can have a maximum of 64,000 datafiles which limits its total capacity. By allowing tablespaces to have a single large datafile the total capacity of the database is increased. A bigfile tablespace with 8K and 32K blocks can contain 32 terabyte and 128 terabyte datafiles respectively.
* Using fewer larger datafiles allows the DB_FILES and MAXDATAFILES parameters to be reduced, saving SGA and controlfile space.
* The ALTER TABLESPACE syntax has been updated to allow operations at the tablespace level, rather than datafile level.

Typically, bigfile tablespaces must be locally managed with automatic segment-space management. Exceptions to this rule include temporary tablespaces, the SYSTEM tablespace, and locally managed undo tablespaces, which are all allowed to have manual segment-space management.

Bigfile tablespaces are created using the CREATE BIGFILE TABLESPACE command. The first two of the preceding statements have the same effect, as the EXTENT MANAGEMENT LOCAL and SEGMENT SPACE MANAGEMENT AUTO clauses are the default actions, while the last statement would raise an error.

Reverting a Table to its Previous Statistics


Usually, re-analyzing a schema and specifying a high percentage of rows for the sample size will improve performance. Unfortunately, the occasional hiccup will occur when you re-analyze tables. Sometimes the new statistics produce much worse execution plans than before. You can avoid the risk of a major screw up by using the DBMS_STATS package to save a copy of your current statistics just in case you need to restore them later. This requires the following steps:

Step 1.
Export your schema statistics to your statistics table. If you don’t already have a statistics table, you can create it using the DBMS_STATS.CREATE_STAT_TABLE procedure (a sketch follows the export command). The export is performed as follows:

EXECUTE SYS.DBMS_STATS.EXPORT_SCHEMA_STATS (OWNNAME=>'STRINIX', STATTAB=> 'PAR_STAT_TABLE', STATID=>'STAT_12DEC_2007');
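
If the PAR_STAT_TABLE statistics table does not already exist, it can be created first. A minimal sketch using the same schema and table names as the export above:

EXECUTE SYS.DBMS_STATS.CREATE_STAT_TABLE (OWNNAME=>'STRINIX', STATTAB=>'PAR_STAT_TABLE');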

Step 2.
Gather your new statistics:

EXECUTE SYS.DBMS_STATS.GATHER_SCHEMA_STATS (OWNNAME=>'STRINIX', ESTIMATE_PERCENT=>10, DEGREE=>4, CASCADE=>TRUE);

Step 3.
If there are problems with unsuitable execution paths being selected as a result of the new statistics, revert back to the previous statistics by loading the previous statistics from the statistics table:

EXECUTE SYS.DBMS_STATS.IMPORT_SCHEMA_STATS (OWNNAME=>'STRINIX', STATTAB=>'PAR_STAT_TABLE', STATID=>'STAT_12DEC_2007');

Manipulating Statistics Using DBMS_STATS


You can modify the individual statistic elements on a table. This might be useful if you want to make the table statistics in a development database match those in production, or to make production statistics match development.

You can achieve this by using DBMS_STATS.SET_TABLE_STATS to modify the statistics for a table in your development or for a test database to match those in your production database. The optimizer uses the number of rows, number of blocks, and number of distinct values for a column to determine whether an index or a full table scan should be used.

The following example assumes that your symb_history table is going to have 2,000,567 rows in 9,000 blocks:

EXECUTE SYS.DBMS_STATS.SET_TABLE_STATS (OWNNAME=>'STRINIX_TEST', TABNAME=>'SYMB_HISTORY', NUMROWS=> 2000567, NUMBLKS=> 9000);

Regardless of how many rows you really have in your test database for the table SYMB_HISTORY, the cost-based optimizer will now behave as if there were 2,000,567.

The optimizer also uses the number of distinct values for each column to decide on index usage. If the number of distinct values is less than 10% of the number of rows in the table, the optimizer will usually decide to perform a full table scan in preference to using an index on the table column. Change the percentage of distinct values for a column as follows:

EXECUTE SYS.DBMS_STATS.SET_COLUMN_STATS (OWNNAME=>'STRINIX_TEST', TABNAME=>'COMP_SCRIP_MAP', COLNAME=>'SYMBOL', DISTCNT=>400);

Copying Statistics Using DBMS_STATS


The DBMS_STATS package provides the ability to copy statistics from one schema to another, or from one Oracle database to another, using the following steps:

Step 1:
First save the statistics to a table. DBMS_STATS.CREATE_STAT_TABLE can be used to create a new table in your own schema to store the statistics of tables and indexes.

EXECUTE DBMS_STATS.CREATE_STAT_TABLE (OWNNAME=>'STRINIX', STATTAB=>'PAR_STAT_TAB');

Step 2. Populate the table with the statistics from the schema that you are copying from:

EXECUTE SYS.DBMS_STATS.EXPORT_SCHEMA_STATS (OWNNAME=>'STRINIX', STATTAB=>'PAR_STAT_TAB', STATID=> 'STAT_19DEC_2007');

Step 3. If you are copying statistics to a different database, such as from production to development, export and import that statistics table as required:

exp strinix/baba@prod file=stats tables=PAR_stat_tab
imp strinix/babat@dev file=stats tables=PAR_stat_tab

Step 4. Populate the statistics in the target schema’s dictionary. The example below shows statistics being loaded for the schema STRINIX_TEST from the table named PAR_STAT_TAB:

EXECUTE SYS.DBMS_STATS.IMPORT_SCHEMA_STATS (OWNNAME=>'STRINIX_TEST', STATTAB=>'PAR_STAT_TAB', STATID=>'STAT_19DEC_2007', STATOWN=> 'STRINIX');

Gathering statistics with the DBMS_STATS Package

DBMS_STATS to Manage Statistics

The DBMS_STATS package provides critical functionality for the cost-based optimizer: it makes the analyze process available from PL/SQL programs, and it allows statistics to be modified, reverted to previously saved values, and copied from one schema (or database) to another.

Using the DBMS_STATS package you can (a small example follows this list):

* Collect statistics on tables and indexes
* Modify statistics on tables
* Save statistics at a certain point in time
* Revert to previously saved statistics
* Copy statistics from one schema to another
* Import statistics from one database to another
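
As a minimal sketch of the most common operation, the following gathers statistics for a single table and its indexes (the schema and table names are reused from the earlier examples and are assumptions):

EXECUTE SYS.DBMS_STATS.GATHER_TABLE_STATS (OWNNAME=>'STRINIX', TABNAME=>'SYMB_HISTORY', ESTIMATE_PERCENT=>10, CASCADE=>TRUE);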

Handling Errors During Backups: Example


By default a checksum is calculated for every block read from a datafile and stored in the backup or image copy. If you use the NOCHECKSUM option, then checksums are not calculated. If the block already contains a checksum, however, then the checksum is validated and stored in the backup. If the validation fails, then the block is marked corrupt in the backup.

The SET MAXCORRUPT FOR DATAFILE command sets how many corrupt blocks in a datafile that BACKUP will tolerate. If a datafile has more corrupt blocks than specified by the MAXCORRUPT parameter, the command terminates. If you specify the CHECK LOGICAL option, RMAN checks for logical and physical corruption.

By default, the BACKUP command terminates when it cannot access a datafile. You can specify parameters to prevent termination, as listed in the following table.
If you specify the option ...    Then RMAN skips ...

SKIP INACCESSIBLE    Inaccessible datafiles. A datafile is only considered inaccessible if it cannot be read. Some offline datafiles can still be read because they exist on disk. Others have been deleted or moved and so cannot be read, making them inaccessible.

SKIP OFFLINE    Offline datafiles.

SKIP READONLY    Datafiles in read-only tablespaces.

The following example uses an automatic channel to back up the database, and sets the corruption level for the datafile in the SYSTEM tablespace so that up to 10 errors will be accepted:

RMAN> RUN
{
SET MAXCORRUPT FOR DATAFILE 1 TO 10;
BACKUP DATABASE
SKIP INACCESSIBLE
SKIP READONLY
SKIP OFFLINE;
}

Backup Validation with RMAN


You can run the BACKUP ... VALIDATE command to check datafiles for physical and logical corruption, or to confirm that all database files exist in the correct locations. No backup is taken, but all specified files are scanned to verify that they can be backed up. Here is an example:

BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

You cannot use the MAXCORRUPT or PROXY parameters with the VALIDATE option.

You can use the VALIDATE keyword of the BACKUP command to do the following:

* Check datafiles for physical and logical corruption
* Confirm that all database files exist and are in the correct locations

RMAN does not actually produce backup sets, but rather reads the specified files in their entirety, to determine whether they can be backed up and are not corrupted. In this sense, the BACKUP VALIDATE command is similar to the RESTORE VALIDATE command, except for backups rather than restore jobs.

If the backup validation discovers corrupt blocks, then RMAN updates the V$DATABASE_BLOCK_CORRUPTION view with rows describing the corruptions. After a corrupt block is repaired, the row identifying this block is deleted from the view.

For example, you can validate that all database files and archived redo logs can be backed up by running a command as follows:

RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

This form of the command checks for physical corruption only. To check for logical corruption as well, add the CHECK LOGICAL clause:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

RMAN displays the same output that it would if it were really backing up the files. If RMAN cannot validate the backup of one or more of the files, then it displays an error message. For example, RMAN may show output similar to the following:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 08/29/2001 14:33:47
ORA-19625: error identifying file /oracle/oradata/trgt/arch/archive1_6.dbf
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3


Detection of Logical Block Corruption

Besides testing for media corruption, the database can also test data and index blocks for logical corruption, such as corruption of a row piece or index entry. If RMAN finds logical corruption, then it logs the block in the alert.log. If CHECK LOGICAL was used, the block is also logged in the server session trace file. By default, error checking for logical corruption is disabled.

For BACKUP commands the MAXCORRUPT parameter sets the total number of physical and logical corruptions permitted in a file. If the sum of physical and logical corruptions for a file is less than its MAXCORRUPT setting, the RMAN command completes successfully. If MAXCORRUPT is exceeded, the command terminates and RMAN does not read the rest of the file. V$DATABASE_BLOCK_CORRUPTION is populated with corrupt block ranges if the command succeeds. Otherwise, you must set MAXCORRUPT higher and re-run the backup to find out the corrupt block ranges.

Detecting Physical and Logical Block Corruption

Because a database server session performs the backup and has detailed knowledge of the files being backed up or copied, the server session is able to detect many types of physically corrupt blocks during the backup process. Each new corrupt block not previously encountered in a backup is recorded in the control file and in the alert.log. By default, error checking for physical corruption is enabled.

At the end of a backup, RMAN stores the corruption information in the recovery catalog and control file. Access this data using the V$DATABASE_BLOCK_CORRUPTION view.

If the server session encounters a datafile block during a backup that has already been identified as corrupt by the database, then the server session copies the corrupt block into the backup, and the corrupt block is recorded in the control file as either a logical or media corruption. RMAN copies the block in case the user wants to try to salvage its contents.

If RMAN encounters a datafile block that has media corruption but that has not already been identified as corrupt by the database, then it writes the block to the backup with a reformatted header indicating that the block has media corruption.

Tests and Integrity Checks for Backups

The database prevents operations that result in unusable backup files or corrupt restored datafiles. The database server automatically does the following:

* Blocks access to datafiles while they are being restored or recovered
* Allows only one restore operation for each datafile at a time
* Ensures that incremental backups are applied in the correct order
* Stores information in backup files to allow detection of corruption

You can use the BACKUP VALIDATE and RESTORE VALIDATE commands to test backup and restore operations without creating output files. In this way, you can check your datafiles for possible problems. If you run RMAN with the following configuration, then it detects all types of corruption that are possible to detect:

* In the initialization parameter file, set DB_BLOCK_CHECKSUM=TRUE
* In the RMAN BACKUP and RESTORE commands, do not specify the MAXCORRUPT option, do not specify the NOCHECKSUM option, but do specify the CHECK LOGICAL option (see the sketch below)
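
Putting those two settings together, a minimal sketch (the parameter can also simply be set in the initialization parameter file):

SQL> ALTER SYSTEM SET DB_BLOCK_CHECKSUM=TRUE SCOPE=BOTH;

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;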

When RMAN Performs Control File Autobackups


By default, control file autobackups are turned off, and no control file autobackups are performed. If CONFIGURE CONTROLFILE AUTOBACKUP is ON, then RMAN automatically backs up the control file and the current server parameter file (if used to start up the database) in one of two circumstances: when a successful backup must be recorded in the RMAN repository, and when a structural change to the database affects the contents of the control file which therefore must be backed up.
Control File Autobackups After Backup Activities

This means that the control file is backed up in the following situations:

* After every BACKUP command issued at the RMAN prompt.
* At the end of a RUN block, if the last command in the block was BACKUP.
* Whenever a BACKUP command within a RUN block is followed by a command that is not BACKUP.

The first channel allocated during the backup job creates the autobackup and places it into its own backup set. The control file autobackup may be written to disk or tape.
Control File Autobackups After Database Structural Changes

The control file is also automatically backed up after database structural changes such as adding a new tablespace, altering the state of a tablespace or datafile (for example, bringing it online), adding a new online redo log, renaming a file, adding a new redo thread, and so on. Losing this information would compromise your ability to recover the database.

This backup is performed by the server process itself, rather than one of the RMAN channels. This type of autobackup, unlike autobackups that occur after a successful backup, is always created on disk. You can use CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK to set the location for this disk-based control file autobackup (see the example below). Note that a failure of the automatic control file autobackup after a structural change never causes the associated structural change to fail. For example, if you add a datafile, and if the resulting control file autobackup fails, then the datafile addition is still successful.
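
For example, a sketch that places disk autobackups in an assumed directory (the path is an assumption; the format string must contain %F):

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/backup/cf_%F';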

How RMAN Performs Control File Autobackups

The first channel allocated during the backup job creates the autobackup and places it into its own backup set; for autobackups after database structural changes, the default disk channel makes the backup. If a server parameter file is used, it is backed up in the same backup set as the control file during a control file autobackup.

After the control file autobackup completes, the database writes a message containing the complete path of the backup piece and the device type to the alert log.

The RMAN behavior when the BACKUP command includes datafile 1 depends on the CONFIGURE CONTROLFILE AUTOBACKUP setting. If control file autobackups are ON and the backup includes datafile 1, RMAN writes the control file and SPFILE to a separate autobackup backup set. If control file autobackups are OFF and the backup includes datafile 1, then RMAN includes the current control file and SPFILE in the same backup set as the datafiles.

The control file autobackup filename has a default format of %F for all device types, so that RMAN can guess the file location and restore it without a repository. The substitution variable %F is defined in the description of the CONFIGURE command in Oracle Database Backup and Recovery Basics. You can specify a different format with the CONFIGURE CONTROLFILE AUTOBACKUP FORMAT command. All autobackup formats must include the %F variable.

The SET CONTROLFILE AUTOBACKUP FORMAT command, which you can specify either within a RUN block or at the RMAN prompt, overrides the configured autobackup format in the session only. The order of precedence is:

1. SET within a RUN block
2. SET at RMAN prompt
3. CONFIGURE CONTROLFILE AUTOBACKUP FORMAT
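
For example, a sketch that overrides the configured format for a single backup job only (the path is an assumption):

RMAN> RUN
{
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/backup/run_cf_%F';
BACKUP DATABASE;
}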

You can configure the autobackup format even when CONFIGURE CONTROLFILE AUTOBACKUP is set to OFF, but RMAN does not generate autobackups in this case. For RMAN to make autobackups, you must set CONFIGURE CONTROLFILE AUTOBACKUP to ON.

What command do you execute to ensure that the control file is automatically backed up using Recovery Manager (RMAN)?

The automatic backup of the control file occurs independently of any backup of the current control file explicitly requested as part of your backup command. You can turn the autobackup feature on or off by running the following commands:

CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
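
You can verify the current settings from the RMAN prompt:

RMAN> SHOW CONTROLFILE AUTOBACKUP;
RMAN> SHOW CONTROLFILE AUTOBACKUP FORMAT;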

Which background process coordinates the rebalancing of disks in ASM

In addition to the normal database background processes like CKPT, DBWR, LGWR, SMON, and PMON, an ASM instance uses at least two additional background processes to manage data storage operations. The Rebalancer process, RBAL, coordinates the rebalance activity for ASM disk groups, and the Actual ReBalance processes, ARBn, handle the actual rebalance of data extent movements. There are usually several ARB background processes (ARB0, ARB1, and so forth). I will talk more about disk rebalancing operations in the final article in this series.


What is Oracle ASM in Oracle 10g? How do we create an ASM instance?

Automatic Storage Management (ASM) is a new feature that was introduced in Oracle 10g to simplify the storage of Oracle datafiles, controlfiles and logfiles.

Automatic Storage Management (ASM) simplifies administration of Oracle-related files by allowing the administrator to reference disk groups rather than individual disks and files, which are managed by ASM. The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. The new ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.

ASM provides the following functionality:

* Manages groups of disks, called disk groups.
* Manages disk redundancy within a disk group.
* Provides near-optimal I/O balancing without any manual tuning.
* Enables management of database objects without specifying mount points and filenames.
* Supports large files.


Creating ASM instance

Step 1. To create an ASM instance, first create a parameter file called initASM.ora containing:
INSTANCE_TYPE=ASM

The initialization parameters that are of specific interest for an ASM instance are:

INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default is RDBMS.
DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.
ASM_POWER_LIMIT -The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 to 11, with 1 being the default. The higher the limit the more resources are allocated resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.
ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.
ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL allowing all suitable disks to be considered.
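
A minimal initASM.ora sketch using these parameters (the disk string and disk group name are assumptions, not values from the text):

INSTANCE_TYPE=ASM
DB_UNIQUE_NAME=+ASM
ASM_POWER_LIMIT=1
ASM_DISKSTRING='/dev/raw/raw*'
ASM_DISKGROUPS='DATA'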

Step 2: Using SQL*Plus, connect to the idle instance.

export ORACLE_SID=+ASM
sqlplus / as sysdba

Step 3: Create an SPFILE using the contents of the initASM.ora file.

SQL> CREATE SPFILE FROM PFILE='initASM.ora';

File created.
Step 4: Start the instance with the NOMOUNT option.

SQL> startup nomount
ASM instance started

The ASM instance is now ready to use for creating and mounting disk groups. To shut down the ASM instance, issue the following command.

SQL> shutdown
ASM instance shutdown
SQL>
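
Once the instance is running, disk groups can be created and mounted. A minimal sketch (the disk group name and device paths are assumptions):

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
     FAILGROUP fg1 DISK '/dev/raw/raw1'
     FAILGROUP fg2 DISK '/dev/raw/raw2';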

Startup and Shutdown of ASM Instances
ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:

* FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.
* MOUNT - Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
* NOMOUNT - Starts the ASM instance without mounting any disk groups.
* OPEN - This is not a valid option for an ASM instance.

The options for the SHUTDOWN command are:

* NORMAL - The ASM instance waits for all connected ASM instances and SQL sessions to exit then shuts down.
* IMMEDIATE - The ASM instance waits for any SQL transactions to complete then shuts down. It doesn't wait for sessions to exit.
* TRANSACTIONAL - Same as IMMEDIATE.
* ABORT - The ASM instance shuts down instantly.

Load Balancing In Oracle 10g Real Application Clusters Environments

An Oracle 10gR2 Real Application Clusters (RAC) clustered database needs more robust load balancing capabilities because of the nature of that environment.

A RAC clustered database comprises at least two (and usually many more) nodes, each running a separate instance of the clustered database. In addition, a RAC database usually needs to supply a minimum number of connections and resources to several applications, each with dramatically different resource needs depending on the current business processing cycle(s), so the application load that’s placed on each instance in the clustered database can be dramatically different at different times of the day, week, month, and year. Finally, it’s likely that a RAC clustered database will need to guarantee a minimum cardinality (i.e. a specific number of nodes on which the application needs to run at all times) to one or more mission-critical applications.

RAC Services. Starting in Oracle 8i, an Oracle database could dynamically register itself directly with its corresponding listener(s) based on the settings for the SERVICE_NAMES initialization parameter, through the database’s Process Monitor (PMON) background process. To completely support this feature, Oracle strongly suggested that the SERVICE_NAME parameter be used instead of the original SID parameter in the TNSNAMES.ORA configuration file, so that an incoming user session could immediately identify the database instance to which it intended to connect.

Oracle 10g RAC leverages this service naming feature to distribute application connections efficiently across a RAC clustered database. For example, a clustered database may need to support three different applications, OLTP, DSS, and ADHOC. The OLTP application is the main order entry application for this enterprise computing environment, and therefore it needs a minimum cardinality of two cluster database instances at all times. The DSS application, on the other hand, supports extraction, transformation and load (ETL) operations for the enterprise’s data warehouse, and thus it requires a minimum cardinality of just one instance. Likewise, the ADHOC application supports OLAP and general user query execution against the data warehouse, but it too only requires a minimum cardinality of a single instance.
Oracle 10gR2 RAC: Server-Side Connect-Time Load Balancing

It’s also important to realize that in a RAC environment, the server-side load balancing methodology differs slightly from the methodology used in a single-instance environment because Oracle 10gR2 discriminates whether the incoming connection has been requested as either a dedicated or a shared server connection:

* If a dedicated session is requested, then the listener will select the instance first on the basis of the node that is least loaded; if all nodes are equally loaded, it will then select the instance that has the least load.
* For a shared server connection, however, the listener goes one step further. It will also check to see if all of the available instances are equally loaded; if this is true, the listener will place the connection on the least-loaded dispatcher on the selected instance.

Advanced Load Balancing: The Load Balancing Advisor (LBA)

In a RAC environment, it’s not unlikely that one node may become overwhelmed by application requests for resources. For example, let’s assume that a two-node clustered database currently has 100 user sessions connected across both nodes in the cluster, and that the database is using standard server-side load balancing. If there is a sudden “storm” of 200 additional connections, the listeners on the two nodes will simply distribute them evenly across both nodes, 100 to each node, resulting in 150 connections per node. However, it’s possible that node RACDB2 is actually much “busier” than the other node. As a result, node RACDB2 will most likely be completely overwhelmed by those new connections’ resource demands, while node RACDB1 remains relatively underutilized with plenty of additional resources.

The good news is that Oracle 10gR2 now provides an advanced method to overcome this imbalance: the Load Balancing Advisor (LBA). The LBA will calculate how much work each instance in the clustered database has been asked to do. The LBA will then make sure that any new incoming work is routed to the instance(s) that are least busy. This determination takes place on a service-by-service basis across all instances that are providing resources to each service.

The LBA considers the following factors when determining how to perform this “balancing act”:

* Are there any differences in processing power between nodes?
* Are there any sessions that are currently blocked from execution because of waits?
* Have any failures occurred on a node that might block processing from continuing?
* Are there any services that are competing for resources, and have those services been granted a different priority to complete their tasks at hand?

Not only does this help to balance out the total workload across all instances in the cluster, it also ensures that one node won't be overwhelmed by requests for service by its primary application(s).

As part of its Fast Application Notification (FAN) event transmission features, Oracle 10g RAC uses Oracle Notification Services (ONS) to communicate the status of cluster resources to all participating node applications in the cluster. In Oracle 10gR2, FAN added a new event type, SERVICE_METRIC, whose event payload contains information about the relative workload of each node in the RAC cluster, and it’s this information that the Load Balancing Advisory uses to determine how to route new connections throughout the clustered database’s instances.

Oracle 10gR2 supplies an upgraded DBMS_SERVICE.MODIFY_SERVICE procedure that modifies an existing RAC service so that it can utilize the Load Balance Advisory for service-by-service load balancing. This procedure provides two crucial parameters, GOAL and CLB_GOAL, that interact to determine what methods (if any) Oracle 10gR2 will utilize to perform load balancing. For example, if I set the GOAL parameter to either the GOAL_SERVICE_TIME or the GOAL_THROUGHPUT enumerated constants for a specific RAC service, Oracle 10gR2 will activate the Load Balancing Advisory for load balancing of that service’s incoming connections. Table 1 explains the difference between these two load balancing targets.
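
A minimal sketch of such a call (the service name is an assumption; the constants belong to the DBMS_SERVICE package):

EXECUTE DBMS_SERVICE.MODIFY_SERVICE (service_name => 'OLTP', goal => DBMS_SERVICE.GOAL_THROUGHPUT, clb_goal => DBMS_SERVICE.CLB_GOAL_SHORT);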

Oracle 10gR2 load balancing - Server side load balancing

The two previous methods will adequately handle the distribution of user sessions across available resources while helping to guarantee that no session will wait an excessive time to find a currently active address on which to connect. However, neither method takes into account how busy each node actually is. Clearly, a better solution was needed, and Oracle 9i offered one: server-side load balancing. This method divides the connection load evenly between all available listeners by determining the total number of connections on each listener, and then distributing new user session connection requests to the least loaded listener(s) based on the total number of sessions already connected. While a bit more complex to implement because it requires configuration of multiple listeners, it most definitely helps to even out connections across all available listeners in a database system.

To implement server-side load balancing, at least two listeners must be configured. Also, the REMOTE_LISTENER initialization parameter must be added to the database’s PFILE or SPFILE so that the database can resolve the value of that parameter via the database server’s TNSNAMES.ORA configuration file. When server-side load balancing is activated, each listener that contributes a listening endpoint communicates with the other listener(s) via each database instance’s PMON process. Oracle then determines how many user connections each listener is servicing, and it will distribute any new connection requests so that the load is balanced evenly across all servers. The entries in TNSNAMES.ORA direct the listeners to share information about the relative connection load.
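
A minimal configuration sketch (the alias, host names, and port are assumptions): set the parameter in the SPFILE and define a matching alias in the server's TNSNAMES.ORA.

SQL> ALTER SYSTEM SET REMOTE_LISTENER='LISTENERS_RACDB' SCOPE=BOTH SID='*';

LISTENERS_RACDB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racdb1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = racdb2-vip)(PORT = 1521))
  )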

Oracle 10gR2 load balancing- Client-Side Connection Failover

Client-Side Connection Failover. Obviously, balancing the overall number of connections is a desirable goal, but what happens if the chosen connection point is unavailable because the database server’s listener is no longer active? To forestall this, Oracle 8i provided the capability to determine if the connection that has been chosen at random is still “alive” and, if not, continue to try the other listed connection points until a live connection is found. This simple method tends to limit the chance of a lost connection, but unfortunately it must rely on TCP/IP timeouts to determine if a connection is alive or dead, and this means that an application may wait several seconds (or even longer!) before it receives a notification that the connection has been terminated.

Oracle 10gR2 load balancing - Part2 - Client-Side Load balancing

Client-Side Load Balancing Methods

1. Client-Side Connection Load Balancing.
This load balancing method has been available since Oracle 8i. When a user session attempts to connect to the database, the connection request is assigned at random to one of the multiple listening endpoints listed in the client's connect descriptor.

While this load balancing method is certainly the simplest to implement, it also has an obvious limitation: there is no way to know whether the session has been assigned to an endpoint whose corresponding database server is already overloaded. Moreover, since the connection is essentially picked completely at random, there is no guarantee that the endpoint chosen will even be available at that time. This may force the session to wait for a considerable length of time – perhaps even minutes, a relative eternity in computing timeframes! – until the operating system indicates to the listener that the connection is unavailable, which causes the user session to fail eventually with an ORA-01034 ORACLE not available error.

Oracle 10gR2 load balancing - Part1

Oracle 10gR2 takes advantage of simple client-side connection load balancing, server-side load balancing, and the new Load Balancing Advisor (LBA) features to significantly improve the distribution of workloads across the appropriate nodes in an Oracle 10gR2 Real Application Clusters (RAC) clustered database environment.


Oracle database tuning without any tools - using performance views - part 4 - Examine the Oracle Statistics

Examine the Oracle Statistics

Oracle provides the init.ora parameter STATISTICS_LEVEL (BASIC, TYPICAL (the default), or ALL), which controls all major statistics collections and advisories in the database.

You can see which statistics collections and advisories are enabled at the current level in the V$STATISTICS_LEVEL view.
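
For example, a quick sketch of checking which collections are active at the current setting:

SQL> SELECT statistics_name, activation_level, system_status
     FROM v$statistics_level
     ORDER BY statistics_name;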

1. Wait events

The main views to look at for wait events are:
1. v$session
2. v$session_wait
3. v$session_wait_class
4. v$system_wait_class
5. v$session_wait_history
6. v$waitclassmetric
7. v$active_session_history
8. v$sess_time_model
9. v$session_event
10. v$system_event
11. v$event_histogram
12. v$file_histogram
13. v$temp_histogram

Wait events are grouped into classes. The main classes are:
1. administrative
2. application
3. cluster
4. concurrency
5. commit
6. configuration
7. idle
8. network
9. other
10. scheduler
11. system i/o
12. user i/o
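
For example, a simple sketch that lists the top non-idle wait events system-wide:

SQL> SELECT event, total_waits, time_waited
     FROM v$system_event
     WHERE wait_class <> 'Idle'
     ORDER BY time_waited DESC;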

Oracle database tuning without any tools - using performance views - part 3 - Examine the Host system

Examine the Host system

1. CPU usage
2. Non-Oracle processes running on the system
3. Oracle processes (use SQL_TRACE and TKPROF to identify bad PL/SQL code or SQL queries)
4. Oracle CPU statistics (look at the following v$ views: v$sysstat, v$sesstat, v$rsrc_consumer_group; see the example after this list)
5. Interpreting CPU statistics
6. Detecting I/O problems
7. Detecting network problems
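
For example, a small sketch of pulling the instance-wide CPU statistics from v$sysstat:

SQL> SELECT name, value
     FROM v$sysstat
     WHERE name IN ('CPU used by this session', 'recursive cpu usage', 'parse time cpu');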

Oracle database tuning without any tools - using performance views - part 2 - Define a problem

Define a problem


It is very important to understand the purpose of the tuning exercise and the nature of the problem before implementing a solution. If we do not understand the problem, it will be almost impossible to implement effective changes. Collect as much data as you can at this stage. The following data is vital for the next steps:

1. Identify the performance objective: what is the measure of acceptable performance i.e. response time that will meet the required performance level.

2. Identify the scope of the problem

What is affected by the slowdown?
Is it a particular application?
Is it a particular program?
Is it a particular operation?
Is it a single user?

3. Identify the time frame when the problem occurs.
Is it happening at a particular time (peak time)?
Does the performance deteriorate over the course of the day?
Is the slowdown gradual or sudden?

4. Quantify the slowdown

This helps to identify the extent of the problem and also acts as a measure for comparison when deciding whether changes made to fix the problem have actually made the difference or not.

5. Identify the changes: Identify what has changed since performance was acceptable. This may narrow down the potential problem very quickly. Examples include a change in hardware configuration or software versions, an upgrade of the Oracle release, a large amount of data being loaded, or growth in data input or user volume.

Oracle database tuning without any tools - using performance views - part 1

Oracle instance tuning steps:

1. Define a problem : This step involves feedback from users about the scope of the problem.

2. Examine the Host System and Examine the Oracle Statistics

a) Obtain a full set of OS, database, and application statistics, and examine the data for any evidence of a performance problem.
b) Consider the list of common performance errors and see if the data gathered points to any of them.
c) Build a conceptual (mental) model of what is happening in the system using the performance data gathered.

3. Implement and measure the change: In this step, implement the proposed changes and state the expected result of each change. Make the changes and see whether you get the expected results. Measure the application performance gain.

4. Determine whether the performance objectives defined in step 1 are met. If not, repeat steps 2 and 3 until the performance goals are met.


Quick DBA Test - Oracle 10g

As a database administrator, you are required to implement a backup plan that has been documented as follows:

a) Full backups every Saturday night
b) Cumulative backups every Wednesday to back up everything since the last full backup
c) Daily incremental backups for the rest of the days of the week

The database has to be available 24 hours a day, 7 days a week, and can therefore not be shut down at any time.

Referring to the scenario above, how do you implement this backup plan using RMAN?

a.) 1. Saturdays - Level 0 Backups; 2. Wednesdays - Level 1 Backups; 3. Sundays, Mondays, Tuesdays, Thursdays, Fridays - Level 2 Backups
b.) 1. Saturdays - Level 1 Backups; 2. Wednesdays - Level 2 Backups; 3. Sundays, Mondays, Tuesdays, Thursdays, Fridays - Level 3 Backups
c.) 1. Saturdays - Level 2 Backups; 2. Wednesdays - Level 1 Backups; 3. Sundays, Mondays, Tuesdays, Thursdays, Fridays - Level 0 Backups
d.) 1. Saturdays - Cold Backups; 2. Wednesdays - Level 1 Backups; 3. Sundays, Mondays, Tuesdays, Thursdays, Fridays - Level 2 Backups
e.) 1. Saturdays - Level 0 Backups; 2. Wednesdays - Level 1 Backups; 3. Sundays, Mondays, Tuesdays, Thursdays, Fridays - Level 1 Backups


Rob has been asked to look into the execution plans from an application which is working very slowly.

Based on the scenario above, at which one of the following does Rob first look when performing SQL statement tuning?

a.) Outer joins
b.) Inner joins
c.) Index range scans
d.) Full table scans
e.) Hash joins

Which one of the following do you execute to ensure that the control file is automatically backed up using Recovery Manager?

a.) CONFIGURE CONTROLFILE AUTOBACKUP ON;
b.) CONFIGURE CONTROLFILE BACKUP AUTO;
c.) CONFIGURE CONTROL FILE AUTOBACKUP;
d.) CONFIGURE CONTROL BACKUP AUTO;
e.) CONFIGURE CONTROL AUTO BACKUP;

Which one of the following background processes coordinates the rebalancing of disks in Automatic Storage Management?

a.) J000
b.) ORB1
c.) RECO
d.) RBAL
e.) CJQ0

You want to have the same statistics in your TEST database as the ones in your PRODUCTION database.

Referring to the above, how do you accomplish the requirements stated above?

a.) Run the GATHER_SCHEMA_STATS on TEST and PRODUCTION at the same time.
b.) Run the GATHER_DATABASE_STATS on TEST and PRODUCTION at the same time.
c.) EXPORT statistics from PRODUCTION and IMPORT to TEST using the DBMS_STATS package procedures.
d.) RESTORE statistics from PRODUCTION to TEST using the DBMS_STATS package procedures.
e.) RESTORE the PRODUCTION tables and data on TEST.

1. In Oracle 10g, how is the scheduler implemented?

a.) As a separate daemon which runs under root privileges
b.) As a separate thread under the SMON daemon which wakes up at a pre-defined interval to check for job schedules
c.) As a collection of functions and procedures in the DBMS_SCHEDULER package
d.) As a collection of functions, procedures, and packages in the DBMS_JOB_SCHEDULER package
e.) As a command-line tool named "schedjob" which interfaces with the OS scheduler

2. Which one of the following is a valid DDL statement to create a Bigfile tablespace?

a.) CREATE BIGFILE TABLESPACE tbs_bf_fct_001 DATAFILE 'tbs_bf_fct_001.dbf' SIZE 6G EXTENT MANAGEMENT DICTIONARY;
b.) CREATE BIGFILE TABLESPACE tbs_bf_fct_001 DATAFILE 'tbs_bf_fct_001.dbf' SIZE 6G;
c.) CREATE BIGFILE TABLESPACE tbs_bf_fct_001 DATAFILE 'tbs_bf_fct_001.dbf' SIZE 6G SEGMENT SPACE MANAGEMENT MANUAL;
d.) CREATE BIGFILE TABLESPACE tbs_bf_fct_001 DATAFILE 'tbs_bf_fct_001.dbf' SIZE 6G SEGMENT SPACE MANAGEMENT MANUAL EXTENT MANAGEMENT DICTIONARY;
e.) CREATE BIGFILE TABLESPACE tbs_bf_fct_001 DATAFILE 'tbs_bf_fct_001.dbf' SIZE 3GB, 'tbs_bf_fct_002.dbf' SIZE 3G;

3. Which one of the following do you query in order to access fine-grained audit information?

a.) DBA_FGA_AUDIT_INFO
b.) DBA_FGA_AUDIT_TRAIL
c.) DBA_FGA_AUDIT
d.) V$FGA_AUDIT_TRAIL
e.) V$FGA_AUDIT_INFO


4. Which one of the following describes the "Reset database to incarnation" command used by Recovery Manager?
a.) It performs point-in-time recovery when using Recovery Manager.
b.) It is used to undo the effect of a resetlogs operation by restoring backups of a prior incarnation of the database.
c.) It restores the database to a save point as defined by the version control number or incarnation number of the database.
d.) It restores the database to the initial state in which it was found when first backing it up via Recovery Manager.
e.) It performs a resynchronization of online redo logs to a given archive log system change number (SCN).


5.You have a multi-master replication setup. You need to add a DATE type column to the table GL_ENTRY.

Referring to the scenario above, which one of the following Oracle packages do you execute to ensure that this change is replicated to all the other sites?

a.) DBMS_REPCAT.ALTER_MASTER_OBJECT
b.) DBMS_REPCAT.EXECUTE_MASTER_DDL
c.) DBMS_REPCAT.ALTER_MASTER_DDL
d.) DBMS_REPCAT.REPLICATE_OBJECT_DDL
e.) DBMS_REPCAT.ALTER_MASTER_REPOBJECT



6. How many levels of CPU percentage can be specified when creating a resource plan directive for Database Resource Manager?
a.) 2
b.) 4
c.) 8
d.) 16
e.) 32


7. Which one of the following procedures do you use to analyze a single SQL statement?

a.) DBMS_ADVISOR.ADD_SQLWKLD_REF
b.) DBMS_ADVISOR.ADD_SQLWKLD_STATEMENT
c.) DBMS_ADVISOR.QUICK_TUNE
d.) DBMS_ADVISOR.CREATE_TASK
e.) DBMS_ADVISOR.CREATE_SQLWKLD

8. Which one of the following components of Data Guard pulls archived redo logs from the primary site?

a.) MRP
b.) FAL client
c.) FAL server
d.) ARCn
e.) LGWR


9.Which one of the following is a valid Data Availability Mode within Data Guard?

a.) Guaranteed protection mode
b.) Instant protection mode
c.) Rapid protection mode
d.) Maximum protection mode
e.) Delayed protection mode




10. Which one of the following do you access to find out what benefit you can expect to the system after a recommendation is carried out?

a.) DBA_ADVISOR
b.) DBA_ADVISOR_TASKS
c.) DBA_ADVISOR_FINDINGS
d.) DBA_ADVISOR_RECOMMENDATIONS
e.) DBA_ADVISOR_LOG


11. Which one of the following is the default type of materialized view?

a.) Primary Key Materialized View
b.) Object Materialized View
c.) ROWID Materialized View
d.) Complex Materialized View
e.) Hash Materialized View


12. One of your database developers is concerned about optimizer execution plan changes when other changes are made to the database.

Referring to the scenario above, how do you prevent this from occurring?

a.) Make no structural changes to the database.
b.) Use hints.
c.) Use optimizer plan stability.
d.) Set the optimizer mode to RULE.
e.) Set the optimizer mode to ALL_ROWS
