Oracle Data Pump is a new feature in Oracle Database 10g that enables very high-speed movement of data and metadata between databases. This technology is the basis for Oracle's new data movement utilities, Data Pump Export and Data Pump Import.
One very prominent feature of Data Pump is the ability to restart jobs. This capability is extremely valuable to the DBA who is responsible for moving large amounts of data, especially for big jobs that take a long time to complete. A Data Pump Export or Import job can be restarted with no data loss or corruption after an unexpected failure, or after a STOP_JOB command is issued from the Export or Import interactive-command mode.
A very common reason to restart a Data Pump job is that a failure, such as a power failure, an internal error, or an accidental instance bounce, prevented the job from succeeding. Failures are also typically caused by system resource issues, such as insufficient dump file space (in the Data Pump Export case) or insufficient tablespace resources (in the Data Pump Import case). Upon Data Pump job failure, the DBA or user can intervene to correct the problem, then issue the restart command (START_JOB) to continue the job from the point of failure.
This Technical Note describes Data Pump restart capability with two examples, using the Data Pump Export and Import command line utilities, respectively. In both examples, it is necessary to define a directory object, DATA_PUMP_DIR, for the dump files. Furthermore, the Data Pump user, which in our examples is SYSTEM, needs to hold the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. Restart also works for unprivileged users. (See Oracle Database Utilities 10g Release 1 (10.1) for additional information about Data Pump and its use of directory objects.)
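For reference, the directory object and grants can be set up along these lines (a minimal sketch; the operating system path is illustrative, and SYSTEM normally already holds both roles through the DBA role):
SQL> create directory data_pump_dir as '/work1/private/oracle/rdbms/log';
SQL> grant read, write on directory data_pump_dir to system;
SQL> grant exp_full_database, imp_full_database to system;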
Example 1: Restart Data Pump Export
Our first example demonstrates how the restart capability can be used during a Data Pump Export. We will perform a Data Pump Export of the HR schema, specifying the maximum size of the dump file. Data Pump users typically specify the maximum dump file size (with the FILESIZE parameter) as a mechanism to manage on-disk resources. In this example, our job will fail because the specified dump file size is too small.
Step 1: Start Export
In this example, we'll use the "expdp" client interface. An optional job_name has been specified on the command line, which may make it easier for you to find and attach to the job by name at a later time.
Here is the export command:
> expdp system/manager schemas=hr directory=data_pump_dir \
logfile=example1.log filesize=300000 dumpfile=example1.dmp \
job_name=EXAMPLE1
The output will look something like this:
Export: Release 10.1.0.2.0 - Production on Tuesday, 06 July, 2004 6:37
.
.
.
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
.
. . exported "HR"."COUNTRIES" 6.078 KB 25 rows
. . exported "HR"."DEPARTMENTS" 6.632 KB 27 rows
ORA-39095: Dump file space has been exhausted: Unable to allocate 217088 bytes
Job "SYSTEM"."EXAMPLE1" stopped due to fatal error at 06:38
>
Step 2: Attach to the Job
Our Export job (EXAMPLE1) has encountered a fatal error and the client has returned to the operating system prompt (>). We can examine the job state by invoking the following query:
SQL> select job_name,state from dba_datapump_jobs;
JOB_NAME STATE
------------------------------ ------------------------------
EXAMPLE1 NOT RUNNING
In this simple example, it's quite obvious what the problem is. The dump file we specified is too small for the HR schema. We can determine the reason for the error by looking at the client output that was displayed on our screen or the Data Pump log file.
To fix this problem, we need to add a second dump file. Let's attach to our job using the "EXAMPLE1" name. When we successfully attach to the job, its status and other useful information are displayed.
>expdp system/manager attach=EXAMPLE1
Export: Release 10.1.0.2.0 - Production on Tuesday, 06 July, 2004 6:38
.
.
.
Job: EXAMPLE1
Owner: SYSTEM
Operation: EXPORT
.
.
.
Total Objects: 7
Worker Parallelism: 1
Step 3: Add a Dump File
At this juncture, a dump file can be added by issuing the ADD_FILE directive at the Export> prompt. The new dump file will automatically be created in the same directory as our original dump file (DATA_PUMP_DIR).
Export>add_file=hr1.dmp
We can then issue the STATUS command and see that the additional dump file is now displayed.
Export>status
Job: EXAMPLE1
Operation: EXPORT
Mode: SCHEMA
State: IDLING
Bytes Processed: 55,944
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example1.dmp
size: 303,104
bytes written: 163,840
Dump File: /work1/private/oracle/rdbms/log/hr1.dmp
bytes written: 4,096
Step 4: Restart/Continue the Job
Finally, we issue the CONTINUE_CLIENT command. The job EXAMPLE1 will now resume.
Export>continue_client
Export> Job EXAMPLE1 has been reopened at Tuesday, 06 July, 2004 6:38
Restarting "SYSTEM"."EXAMPLE1": system/******** schemas=hr
directory=data_pump_dir logfile=example1.log filesize=300000
dumpfile=example1.dmp job_name=EXAMPLE1
Master table "SYSTEM"."EXAMPLE1" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.EXAMPLE1 is:
/work1/private/oracle/rdbms/log/example1.dmp
/work1/private/oracle/rdbms/log/hr1.dmp
Job "SYSTEM"."EXAMPLE1" completed with 1 error(s) at 06:38
We could alternatively have used the START_JOB command. The CONTINUE_CLIENT command changes the mode from interactive-command mode to logging mode and then performs an implicit START_JOB.
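Had we wanted to remain in interactive-command mode while the job ran, the session would have looked something like this (a sketch):
Export> start_job
Export> status
Export> exit_client
EXIT_CLIENT detaches the client but leaves the restarted job running on the server.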
Example 2: Restart Data Pump Import—Resumable Wait Timeout
In Example 2, we will demonstrate Data Pump restart capability by doing a remap tablespace Import operation. Our Data Pump job will experience what is called a resumable wait, caused by insufficient space in the target tablespace. We show how the DBA can intervene by adding a datafile to the tablespace and subsequently restarting the import job.
Step 1: Create a New Tablespace
Our dump file contains various schemas that we would like imported into a new tablespace, and our target database is up and running. First, the DBA needs to create the new tablespace for the imported schemas. We'll bring up SQL*Plus and issue the following command:
SQL> create tablespace example2
datafile '/work1/private/rdbms/dbs/example2.f'
size 1M extent management local;
Step 2: Start the Import
Now that our target tablespace has been created, we are ready to perform the Data Pump Import job by using this command:
>impdp system/manager dumpfile=example2.dmp \
remap_tablespace=system:example2 logfile=example2imp.log \
job_name=example2
Import: Release 10.1.0.2.0 - Production on Tuesday, 06 July, 2004 6:54
.
.
.
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39171: Job is experiencing a resumable wait.
ORA-01658: unable to create INITIAL extent for segment in tablespace EXAMPLE2
Step 3: Stop the Job—Add a Tablespace File
Our Import job has entered the resumable wait state and is effectively suspended. The job will stay in the resumable wait until it is stopped or until the resumable wait period expires, which by default is two hours. At this juncture, the DBA can intervene by adding a datafile to the EXAMPLE2 tablespace. One very good reason to stop the job is if the DBA has to do maintenance on the disk subsystem in conjunction with adding the datafile; in the general case, it may not be necessary to stop the job.
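While the job is suspended, the wait is visible from any other session; a quick way to confirm it (a minimal sketch):
SQL> select name, status, error_msg from dba_resumable;
The STATUS column shows SUSPENDED for the waiting statement, and ERROR_MSG carries the ORA-01658 text.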
In our example, we will stop the job with a Control-C prior to the resumable wait expiration.
^C
Import>stop_job=immediate
Step 4: Add a File to the Tablespace
We can invoke SQL*Plus and add a file to the EXAMPLE2 tablespace.
SQL>alter tablespace example2 add datafile '/work1/private/rdbms/dbs/example2b.f'
size 1m autoextend on maxsize 50m;
Step 5: Attach to the Job
We are now ready to attach to the job and restart our import. Note that we attach by job name, in this case EXAMPLE2.
>impdp system/manager attach=example2
Import: Release 10.1.0.2.0 - Production on Tuesday, 6 July, 2004 07:01
Copyright (c) 2003, Oracle. All rights reserved.
.
.
.
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example2.dmp
Worker 1 Status:
State: UNDEFINED
Object Schema: HR
Object Name: COUNTRIES
Object Type: SCHEMA_EXPORT/TABLE/TABLE
Completed Objects: 15
Worker Parallelism: 1
Step 6: Restart the Job
Now we can start the job again. This time, we'll use START_JOB.
Import> start_job
Step 7: Check the Job Status
We can optionally check the status of the job.
Import> status
Job: EXAMPLE2
Operation: IMPORT
Mode: SCHEMA
State: EXECUTING
Bytes Processed: 2,791,768
Percent Done: 99
Current Parallelism: 1
Job Error Count: 0
Dump File: /work1/private/oracle/rdbms/log/example2.dmp
Worker 1 Status:
State: EXECUTING
Object Type: SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Worker Parallelism: 1
When the job completes, you will be able to check the example2imp.log file for job status and other information.
In Example 2, we demonstrated how to restart a Data Pump Import job. It's important to note that normally it would not be necessary to stop the job (in Step 3) in order to add the datafile. We could simply have added the file to the tablespace from another session; in other words, we could have skipped Steps 3, 5, 6, and 7. The job would have resumed automatically in that case.
Summary
If you use Data Pump and experience a failure, you may be able to easily correct the problem and then use the Data Pump restart capability without any loss of data, and without having to completely redo the operation.
Oracle 10g DBA Interview Questions and Answers
1. Is the following SQL statement syntactically correct? If not, please rewrite it correctly.
SELECT col1 FROM tableA WHERE NOT IN (SELECT col1 FROM tableB);
Ans: The SQL is incorrect; the NOT IN predicate is missing its left-hand column.
Correct SQL: SELECT col1 FROM tableA WHERE col1 NOT IN (SELECT col1 FROM tableB);
2. What is a more efficient way to write this query, to achieve the same result set?
Ans: SELECT col1 FROM tableA MINUS SELECT col1 FROM tableB;
(Note that the two forms differ when NULLs are present: NOT IN returns no rows if the subquery returns a NULL, whereas MINUS treats NULLs as equal when computing the difference.)
3. How would you determine that the new query is more efficient than the original query?
Ans: Run EXPLAIN PLAN on both queries and compare the execution plans. You can also compare actual run-time statistics using SET AUTOTRACE or SQL trace/TKPROF.
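For example (a minimal sketch using DBMS_XPLAN, which is available in 10g):
SQL> explain plan for select col1 from tableA minus select col1 from tableB;
SQL> select * from table(dbms_xplan.display);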
4. How can we find the location of the database trace files from within the data dictionary?
Ans: Generally, a trace file on the database server machine is written to one of two locations:
1. If you are using a dedicated server connection, the trace file is generated in the directory specified by the USER_DUMP_DEST parameter.
2. If you are using a shared server connection, the trace file is generated in the directory specified by the BACKGROUND_DUMP_DEST parameter.
You can run SHOW PARAMETER DUMP_DEST in SQL*Plus, or query:
select name, value
from v$parameter
where name like '%dump_dest%';
5. What is the correct syntax for a UNIX endless WHILE loop?
while :
do
commands
done
6. Write the SQL statement that will return the name and size of the largest datafile in the database.
SQL> select name,bytes from v$datafile where bytes=(select max(bytes) from v$datafile);
7. What are the proper steps to changing the Oracle database block size?
Ans: You cannot change the database block size on the fly; it is fixed when the database is created. The usual approach is to take a full export of the database, recreate the database with the new DB_BLOCK_SIZE, and import the data back in. Take a cold backup first as insurance, but note that datafiles from the old database cannot simply be restored into a database created with a different block size. (In 10g you can also create individual tablespaces with a non-standard block size, using the DB_nK_CACHE_SIZE parameters, and move segments into them.)
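At a high level, the procedure looks something like this (a sketch only; file names are illustrative):
> exp system/manager full=y file=full.dmp log=full_exp.log
(shut down, then recreate the database with the new DB_BLOCK_SIZE in the parameter file)
> imp system/manager full=y file=full.dmp log=full_imp.log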
8. Using awk, write a script to print the 3rd field of every line.
Ans:
awk '{print $3}' filename
9. Under what conditions is a nested loop better than a merge join?
Ans:
The optimizer uses a nested loop join when joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as the inner table is probed once for every row coming from the outer table.
The optimizer tends to avoid a nested loop when:
1. The number of rows in both tables is quite high.
2. The inner query always returns the same set of records.
3. The access path of the inner table is independent of the data coming from the outer table.
A merge join is used to join two independent data sources. Merge joins perform better than nested loops when the volume of data in the tables is large, but in general not as well as hash joins.
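The two join methods can be compared directly by forcing each with a hint (a minimal sketch; EMP and DEPT are the classic SCOTT demo tables):
select /*+ use_nl(d e) */ e.ename, d.dname
from dept d, emp e where e.deptno = d.deptno;
select /*+ use_merge(d e) */ e.ename, d.dname
from dept d, emp e where e.deptno = d.deptno;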
10. Which database views would you use to ascertain the number of commits a user's session has performed?
Ans: Join V$SESSTAT and V$STATNAME (note the join is on STATISTIC#, not CLASS):
select a.value
from v$sesstat a, v$statname b
where b.statistic# = a.statistic#
and b.name = 'user commits'
and a.sid = <session SID>;
11. What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?
Ans: On the first line of an interpreter script, the "#!" is followed by the path of the program that should be used to interpret the contents of the file.
For instance, if the first line contains "#!/bin/ksh", then the contents of the file are executed as a Korn shell script.
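A trivial illustration (hypothetical script):
#!/bin/ksh
# the line above causes /bin/ksh to interpret the rest of this file
print "running under ksh"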
12. What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages (ONS, VIP, listener, instances, etc.)?
Ans:
$ crs_stat -t
(crsctl check crs verifies the health of the CRS daemons themselves; ocrcheck, by contrast, checks the integrity of the Oracle Cluster Registry, not resource status.)
13. Describe a scenario in which vendor clusterware is required in addition to Oracle 10g Clusterware.
If you choose external redundancy for the OCR and voting disk, the disk subsystem must supply that redundancy, either through RAID mirroring or through vendor clusterware. Otherwise, your system may be vulnerable because the OCR and voting disk become single points of failure.
14. How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?
Ans: Use the oifcfg command.
Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are
optional depending on the command, are:
*nodename—Name of the Oracle Clusterware node as listed in the output from the olsnodes command
*if_name—Name by which the interface is configured in the system
*subnet—Subnet address of the interface
*if_type—Type of interface: public or cluster_interconnect
You can use OIFCFG to list the interface names and the subnets of all of the interfaces available on the local node
by executing the iflist keyword as shown in this example:
oifcfg iflist
hme0 139.185.141.0
qfe0 204.152.65.16
You can also retrieve specific OIFCFG information with a getif command using the following syntax:
oifcfg getif [ [-global | -node nodename] [-if if_name[/subnet]] [-type if_type] ]
To store a new interface use the setif keyword. For example, to store the interface hme0, with the subnet
139.185.141.0, as a global interface (to be used as an interconnect for all of the RAC instances in your cluster),
you would use the command:
oifcfg setif -global hme0/139.185.141.0:cluster_interconnect
For a cluster interconnect that exists between only two nodes, for example rac1 and rac2, you could create the cms0
interface with the following command, assuming 139.185.142.0 is the subnet address for the interconnect on rac1
and rac2:
oifcfg setif -global cms0/139.185.142.0:cluster_interconnect
Use the OIFCFG delif command to delete the stored configuration for global or node-specific interfaces. A specific
node-specific or global interface can be deleted by supplying the interface name, with an optional subnet, on the
command line. Without the -node or -global options, the delif keyword deletes either the given interface or all of
the global and node-specific interfaces on all of the nodes in the cluster. For example, the following command
deletes the global interface named qfe0 for the subnet 204.152.65.0:
oifcfg delif -global qfe0/204.152.65.0
On the other hand, the next command deletes all of the global interfaces stored with OIFCFG:
oifcfg delif -global
15. What is the purpose of the voting disk in Oracle 10g Clusterware?
Ans: The voting disk records node membership information. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster. The voting disk must reside on a shared disk. For high availability, Oracle recommends that you have a minimum of three voting disks. If you configure a single voting disk, you should use external mirroring to provide redundancy. You can have up to 32 voting disks in your cluster.
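To list the voting disks configured in your cluster, you can run (a sketch; output varies by installation):
$ crsctl query css votedisk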
16. What is the purpose of the OCR in Oracle 10g Clusterware?
Ans: The Oracle Cluster Registry (OCR) is the component in 10g RAC used to store cluster configuration information. It is a shared-disk component, typically located in a shared raw volume, that must be accessible to all nodes in the cluster.
The CRSd daemon manages the configuration information in the OCR and maintains changes to the cluster in the registry.
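The location and integrity of the OCR can be checked with the ocrcheck utility (a sketch; it reports the OCR device/file name, version, free space, and the result of an integrity check):
$ ocrcheck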
17. In Oracle Streams archived log downstream capture, which database view can be used to determine which archived
logs are no longer needed by the capture process?
Ans: V$ARCHIVE_DEST_STATUS