Intro to Oracle Database Architecture
In this section, you will learn about the Oracle architectural components: the background processes such as the Log Writer and DB Writer; the memory structures such as the SGA, Buffer Cache, and Shared Pool; and the physical layout such as datafiles and controlfiles. All these components, running together, play an important part in the Oracle architecture.
Oracle Memory Components
All components such as the Shared Pool (Library Cache, Dictionary Cache), Buffer Cache, Redo Log
Buffer, Large Pool, and Java Pool, as well as a few other items, are referred to as the System Global
Area (SGA). The area that stores information such as bind variable values, sort areas, and cursor-handling
state for a specific user is called the Program Global Area (PGA). The PGA stores the real values used
in place of bind variables when executing SQL statements. These memory structures, together with
the background processes, while running, are called an Oracle instance.
All components such as the Log Writer (LGWR), DB Writer (DBWR), Checkpoint (CKPT), Recovery
Process (RECO), Lock Process (LCKn), Archive Process (ARCH), System Monitor (SMON), and
Process Monitor (PMON) are referred to as the Oracle background processes.
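You can see which of these background processes are running on your instance by querying the V$BGPROCESS view; rows with a non-zero PADDR column correspond to started processes. This query is a sketch and assumes you have SELECT privilege on the view:

SQL> SELECT name, description
FROM v$bgprocess
WHERE paddr != '00'
/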
All components such as the Server Parameter File (SPFILE), Parameter File (PFILE, the INIT.ORA file),
controlfiles, datafiles, the password file, archived log files, and Online Redo Log files are referred to as
the Oracle database components.
All these Oracle components running together allow users to read, write, and modify data in an
Oracle database.
The following are brief job descriptions for the above components.
Shared Pool
The Shared Pool contains the Library Cache and the Dictionary Cache, as well as a few other items
that are not in the scope of this section. The Library Cache holds all users’ SQL statements,
functions, procedures, and packages. It stores each parsed SQL statement with its execution plan for
reuse. The Dictionary Cache, sometimes also referred to as the Row Cache, holds the Oracle
repository information such as table, index, and column definitions, usernames, passwords,
synonyms, views, procedures, functions, packages, and privilege information.
Buffer Cache
The Buffer Cache holds users’ data. Users query their data while it is in the Buffer Cache. If a
user’s request is not in the Buffer Cache, the server process has to bring it in from disk. The smallest
unit in the Buffer Cache is an Oracle block. The Buffer Cache can be increased or decreased in
granule units. The granule size is 4MB if the SGA is less than 128MB, and becomes 16MB if the SGA
is more than 128MB.
Redo Log Buffer
The Redo Log Buffer holds users’ redo entries generated by DML statements such as INSERT, UPDATE,
and DELETE, and by DDL statements such as CREATE TABLE and DROP TABLE. The redo entries are
information that will be used to reconstruct, or redo, changes made to a database. The Log Writer
writes the entries to the Online Redo Log files when a COMMIT occurs, every 3 seconds, or when the
Redo Log Buffer is one-third full. This guarantees that the database can be recovered up to the point
of failure if an Oracle database failure occurs.
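The size of the Redo Log Buffer is controlled by the LOG_BUFFER initialization parameter; you can display the current value with:

SQL> SHOW PARAMETER log_buffer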
Large Pool
The Large Pool holds information for the Recovery Manager (RMAN) utility when RMAN is running.
If you use the Multi-threaded Server (MTS) configuration, you can arrange the Oracle memory
structures so that session memory is taken from the Large Pool instead of the Shared Pool. Notice
that when you use dedicated servers, user session information is housed in the PGA.
The Multi-threaded Server process is used when a user sends a request through a shared server.
The user’s request is assigned to a dispatcher based on the availability of dispatchers. The
dispatcher then sends the request to, and receives results from, an assigned shared server.
When you are running dedicated servers, session information can be stored inside the process
global area (PGA). The UGA is the user global area, which holds session-based information.
When you are running shared servers, session information is stored inside the user global area
(UGA). When your session does some sorting, some of the memory allocated for sorting -
specifically the amount defined by the parameter sort_area_retained_size - comes from the SGA,
and the rest (up to sort_area_size) comes from the PGA of the shared server process (Snnn). This is
because the sort_area_retained_size may have to be held open as the pipeline for returning results
to the front end, so it has to be located where the session can find it again as the session migrates
from server to server. The sort_area_size portion, on the other hand, is a complete throwaway, and
by locating it in the PGA, Oracle can make the best use of available memory without soaking the
SGA. To avoid sessions grabbing too much memory in the SGA when running MTS/shared servers,
you can set the private_sga limit in a resource profile for the user. This ensures that any particularly
greedy SQL that, for example, demands multiple allocations of sort_area_retained_size will fail with
an error rather than flooding and exhausting the SGA.
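As a sketch of that technique, the PRIVATE_SGA limit is assigned through a profile; the profile name mts_profile and the user scott below are examples only. The limit is enforced only while RESOURCE_LIMIT is TRUE.

SQL> CREATE PROFILE mts_profile LIMIT PRIVATE_SGA 500K;
SQL> ALTER USER scott PROFILE mts_profile;
SQL> ALTER SYSTEM SET resource_limit = TRUE;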
Background Processes
Log Writer Background Process (LGWR)
The LGWR’s job is to write the users’ redo entries from the Redo Log Buffer to the Online Redo Log
files when the buffer is more than one-third full, every 3 seconds, or when a user executes a COMMIT
statement.
DB Writer Background Process (DBWR)
The DBWR’s job is to write all the blocks that were marked as dirty to the Oracle database on
disk (the datafiles) whenever the checkpoint process signals it. Notice that when an Online Redo Log
file fills, the checkpoint process signals DBWR to write all the dirty blocks to the Oracle
database.
Checkpoint Background Process (CKPT)
The Checkpoint process signals the DB Writers to write all dirty blocks to disk. A checkpoint occurs
either at a time interval defined by the DBA, after a defined amount of the Online Redo Log file has
been used, or when an Online Redo Log file is switched from one log file to another. The following
are the parameters a DBA uses to adjust how frequently checkpoints occur in the
database.
LOG_CHECKPOINT_TIMEOUT = 3600 # every one hour
LOG_CHECKPOINT_INTERVAL=1000 # number of Operating System blocks
Recovery Background Process (RECO)
The RECO process is used only if you have a distributed database. It recovers in-doubt distributed
transactions when a failure occurs due to a physical server problem or a communication problem.
Lock Background Process (LCKn)
The LCKn background processes are used when you have multiple instances accessing a single
database, as in Oracle Parallel Server or Real Application Clusters.
Archive Background Process (ARCH)
This background process archives the Online Redo Log files when an Online Redo Log file is
switched, whether manually or automatically. Examples of manual switching are: ALTER SYSTEM
SWITCH LOGFILE and ALTER SYSTEM ARCHIVE LOG CURRENT.
System Monitor Background Process (SMON)
When you start your database, SMON makes sure that all datafiles, controlfiles, and log files
are synchronized before opening the database. If they are not, it performs an instance recovery. It
checks the last SCN recorded in the datafiles. If it finds transactions that were not applied to the
datafiles, it recovers, or synchronizes, the datafiles from either the Online Redo Log files or the
archived log files. Smaller Online Redo Log files bring a faster database recovery.
Process Monitor Background Process (PMON)
A user may be disconnected either by canceling the session or by a communication-link failure. In
either case, PMON performs the cleanup, releasing the remaining memory allocations that were
assigned to the user.
Physical Database Components
Parameter File (PFILE - INIT.ORA)
You can read and change this file. The file contains all the Oracle parameters used to configure a
database instance. In this file, you can reset and change the Buffer Cache size, Shared Pool size,
Redo Log Buffer size, and so on. You use this file to increase or decrease the size of the System
Global Area (SGA). You can also change the location of your controlfiles, the mode of the database
(archivelog mode or noarchivelog mode), and many other parameter options that you will learn
about in the course of this book.
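A minimal INIT.ORA fragment might look like the following; the values and the controlfile location are examples only, not recommendations:

db_name = mydb
control_files = (/u01/oradata/mydb/control01.ctl)
db_cache_size = 64M
shared_pool_size = 60M
log_buffer = 524288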
Server Parameter File (SPFILE)
This file is in binary format and you cannot read it directly. You should create the Server Parameter
File (CREATE SPFILE FROM PFILE) and start up your database using the SPFILE if you want to change
database parameters dynamically. There are still a few parameters that require you to shut down
and restart the database for a change to take effect. You will learn all about these parameters in the
course of this book.
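For example, assuming the PFILE is in its default location, you could switch to an SPFILE and then change a parameter dynamically like this (the parameter and value are illustrative only):

SQL> CREATE SPFILE FROM PFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> ALTER SYSTEM SET shared_pool_size = 64M SCOPE=BOTH;

The SCOPE clause can be MEMORY (change the running instance only), SPFILE (change takes effect at the next startup), or BOTH.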
Controlfiles
You cannot read this file; it is in a binary format. If you want to see the content of a controlfile, or
the layout of your database, you should use the ALTER DATABASE BACKUP CONTROLFILE TO
TRACE statement. It writes a trace file into the trace directory under %ORACLE_BASE containing
information about the structure of your database layout, the database name, the last System Change
Number (SCN), your database mode (archivelog mode or noarchivelog mode), the maximum number
of log files, maximum number of log members, maximum number of instances, maximum number of
datafiles, the location of the database Online Redo Log files, and backup information.
Datafiles
All the Oracle data is stored in the Oracle datafiles. A datafile is one of the physical
layout components of a database. A tablespace (logical database layout) contains one or more
datafiles (physical database layout). A tablespace is a collection of segments; think of a segment as
an object in an Oracle database, such as a table or an index. A segment is made up of one or more
extents, and an extent is a collection of contiguous Oracle blocks. A block is the smallest unit of
storage in an Oracle database.
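You can observe this hierarchy for a given segment by querying the DBA_EXTENTS view; the EMP table is used here only as an example:

SQL> SELECT tablespace_name, segment_name, extent_id, blocks
FROM dba_extents
WHERE segment_name = 'EMP'
/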
Password file
To use the password file, you should set the REMOTE_LOGIN_PASSWORDFILE parameter to exclusive
or shared mode in the Parameter File (example: REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE). A
password file is an external Oracle file; to create it, you run the ORAPWD utility from the
operating system.
For example:
MS-DOS> ORAPWD FILE=%ORACLE_HOME%\database\orapw PASSWORD=<password> ENTRIES=3
The ENTRIES parameter specifies the number of user entries allowed in the password file. Now
the DBA can connect to the database as a user with SYSDBA privileges, as shown here:
SQL> connect sys as sysdba
Password:
Online Redo Log files
The Online Redo Log files hold the redo entries. You should have at least two Redo Log
Groups, and each group may have more than one member. It is a good practice to multiplex the
Online Redo Log members. The redo entries are information that will be used to reconstruct, or
redo, changes made to a database. The Log Writer writes the entries to the Online Redo Log files
when a COMMIT occurs, every 3 seconds, or when the Redo Log Buffer is one-third full. This
guarantees that the database can be recovered up to the point of failure if an Oracle database
failure occurs.
Archive Online Redo Log files
When an Online Redo Log file fills up, the checkpoint forces DBWR to write to the Oracle
datafiles, and the archive process copies the log file to an archive destination directory. This
guarantees that the database can be recovered up to the point of failure if an Oracle database
failure occurs.
Scenario:
Assume that a user is updating a record from SQL*Plus. The following SQL statements form her
transaction:
SQL> UPDATE emp SET sal = 1000 WHERE empno = 100;
SQL> COMMIT;
Let us see what happens when Oracle processes it.
Step 1:
The user types the above SQL statement and presses the Enter key. The user is connected to the
database through either a dedicated server or a shared server (MTS). If the user is using
multi-threaded servers, her request is given to a dispatcher, and the dispatcher passes the request
to a shared server. If the user is using a dedicated server, the dedicated server is all hers. Now her
user process is talking to a shared or dedicated server.
Step 2:
Now the user’s SQL statement is parsed and an execution plan is compiled in the Library Cache in
the Shared Pool. In order for the SQL statement to be compiled, Oracle needs to make sure that its
table and columns are valid and that the user has not violated any security rules. It goes to the
Dictionary Cache, also known as the Row Cache, to get all the necessary information about the
table. If there is no syntax problem and the table and columns are valid, the SQL statement is
parsed successfully and the execution plan is produced.
Step 3:
Assuming there is no problem, the server process fetches the record. If the record is in the
Buffer Cache, the update is applied to it and the block is marked as a dirty block.
Notice that before the user saves the update, the before-images of the block are kept in the UNDO
segment. When the user executes a COMMIT statement, or more than one third of the Redo Log
Buffer has filled, LGWR writes the user’s entries from the Redo Log Buffer to the Online Redo Log
files. The changed block may still not have been written to the database.
If the record is not in the Buffer Cache, the server process reads the block containing the record
from the datafile (disk) and places it into the Buffer Cache.
Step 4:
Now the checkpoint process is activated, based on the LOG_CHECKPOINT_INTERVAL and
LOG_CHECKPOINT_TIMEOUT parameters, or perhaps due to a log switch. CKPT signals DBWR to
write all the dirty blocks to the database (datafiles).
Why Tuning and What Is a Granule Unit
Introduction
As a DBA, you are also responsible for detecting performance problems in your organization’s
database. You need to know how to start investigating a performance problem and then solve it.
Your job’s responsibilities dictate that you should at least be informed of the following basic
fundamental subjects:
Hands-on
In this exercise you will learn about the GRANULE unit, and how to perform performance tuning.
Performance Tuning Steps
When your clients complain about application performance, investigate the problem in the
following sequence.
1- SQL Statement tuning,
2- Optimizing sorting Operations,
3- Memory Allocation.
a- Operating System Memory size,
b- Oracle allocated Memory size (SGA-System Global Area),
4- I/O contention,
5- Latches & Locks,
6- Network Load.
Granule Unit
Remember that the SGA components are allocated and de-allocated in units of contiguous memory
called granules. So it is very important that the amount of allocated memory be an integer multiple
of the granule size.
If the SGA is less than 128MB, then a granule is 4MB. If the SGA is larger than 128MB, then a
granule is 16MB.
The minimum number of granules allocated at startup is three: one for the buffer cache, one for the
shared pool, and one for the fixed SGA, which includes the redo buffers.
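If your version supports it (Oracle 9i and later), you can verify the granule size used by each SGA component by querying the V$SGA_DYNAMIC_COMPONENTS view:

SQL> SELECT component, current_size, granule_size
FROM v$sga_dynamic_components
/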
Sizing the Shared Pool
Introduction
As a DBA, you are responsible for monitoring and changing the memory space allocation for the
Shared Pool in the SGA memory in case of performance problems. Your job’s responsibilities dictate
that you should at least be informed of the following basic fundamental subjects:
Resizing the Shared Pool Memory allocation
Keeping an object in the Shared Pool memory
Displaying the Library Cache Hit Ratio
Displaying the objects from the Shared Pool memory
Removing an object from the Shared Pool memory
Calculating the Dictionary Cache Hit Ratio
The Major components of the Shared Pool memory
The Library Cache
The Dictionary Cache
The User Global Area
Using the SHARED_POOL_RESERVED_SIZE parameter
Commands:
SHOW PARAMETER
SET SERVEROUTPUT ON
EXECUTE SYS.DBMS_SHARED_POOL.SIZES(150);
EXECUTE SYS.DBMS_SHARED_POOL.KEEP('SYS.STANDARD');
EXECUTE SYS.DBMS_SHARED_POOL.UNKEEP('SYS.STANDARD');
Hands-on
In this exercise we will learn how to: re-size the Shared Pool, get the Library Cache Hit Ratio, get the
objects from the Shared Pool, KEEP or UNKEEP an object in the Shared Pool, and calculate the
Dictionary Cache Hit Ratio. Also, we learn about the major components of the Shared Pool such as
the Library Cache, the Dictionary Cache, and the User Global Area.
Now, connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
The major components of the shared pool are: the library cache, the dictionary cache (row cache),
and the User Global Area (UGA).
View shared pool information
Let's first show the size of the shared pool memory.
SQL> SHOW PARAMETER shared_pool
Notice that the SHARED_POOL_RESERVED_SIZE parameter sets aside a portion of the Shared Pool for large objects.
Library Cache Hit Ratio
Let's measure and tune the Library Cache Hit Ratio. Get the overall library cache hit ratio.
SQL> SELECT
SUM(PINS-RELOADS)/SUM(PINS)*100
as "Library Cache Hit Ratio"
FROM v$librarycache
/
Assuming that the instance has been up and running for a while, the hit ratio should be above 99
percent. A very high hit ratio means that objects have not been aged out of the cache.
Details Library Cache Hit Ratio
Query more details about the hit ratio from each library cache item.
SQL> SELECT namespace, gethitratio
FROM v$librarycache
WHERE gethitratio > 0
/
Any ratio below 99 indicates that objects are being aged out of the cache at an unacceptable rate. A
very low ratio may also mean that there have been minimal or no requests against that namespace;
in that case it is normal to have a low ratio.
View Cached objects
Show the count for each object type that was cached more than 100 times.
SQL> SELECT type, count(*) as "Number of Objects"
FROM v$db_object_cache
GROUP BY type
HAVING count(*) > 100
/
View Large objects
Now, show any objects with sizes larger than 150K.
SQL> SET SERVEROUTPUT ON
SQL> EXECUTE SYS.DBMS_SHARED_POOL.SIZES(150);
Cache a PL/SQL package
Use the KEEP procedure to pin the STANDARD package.
SQL> BEGIN
SYS.DBMS_SHARED_POOL.KEEP('SYS.STANDARD');
END;
/
Now your object stays in the memory.
Note that the DBMS_SHARED_POOL package contains stored procedures and functions that make it
easy for developers to manipulate the size of the shared pool, pin objects in the shared pool, and
so on.
Change status of an object
Use the UNKEEP procedure to change the status of the pinned object.
SQL> BEGIN
SYS.DBMS_SHARED_POOL.UNKEEP('SYS.STANDARD');
END;
/
Now your object will not stay in the memory.
Dictionary Cache Hit Ratio
Measure and tune the Dictionary Cache Hit Ratio. Query the V$ROWCACHE view to determine the hit
ratio for each item in the dictionary cache.
SQL> SELECT parameter, gets, getmisses,
100*(gets-getmisses)/(gets) hit_ratio, modifications
FROM v$rowcache WHERE gets> 0
/
The hit ratio should be above 98 percent. Low hit ratios may simply indicate that the objects have
not been loaded into the cache yet.
Overall Dictionary Cache Hit Ratio
Calculate the overall dictionary cache hit ratio.
SQL> SELECT (SUM(gets - getmisses)) / SUM(gets)
AS "Dictionary Hit Ratio"
FROM v$rowcache
/
The hit ratio should be above 85 percent. If the ratio is below 85 percent, you may need to increase
the shared pool size.
View UGA statistics
Query the User Global Area (UGA) statistics.
SQL> SELECT name, SUM(value)
FROM v$sesstat a, v$statname b
WHERE a.statistic# = b.statistic#
AND UPPER(name) like '%UGA%'
GROUP BY name
/
In this query, you cannot differentiate between shared server and dedicated processes. It shows
the current memory consumption and the maximum memory that each session has used.
Sizing the Buffer Cache
Introduction
As a DBA, you are responsible for monitoring and changing the memory space allocation for the
Buffer Cache in the SGA memory in case of performance problems. Your job’s responsibilities dictate
that you should at least be informed of the following basic fundamental subjects:
Resizing the BUFFER CACHE memory allocation
Setting the Dynamic Buffer Cache Advisory parameter
ON
OFF
READY
The Least Recently Used (LRU) list
The Dirty list
The DB Writer processes (DBWn)
Check the SGA memory maximum allocation size
Using the V$PARAMETER view
Checking the Buffer Cache size
Checking the Shared Pool size
Checking the Redo Log buffer size
Checking the JAVA Pool size
Decreasing the Shared Pool memory size
Increasing the Buffer Cache memory size
Commands:
SHOW PARAMETER
ALTER SYSTEM SET db_cache_advice=ON;
ALTER SYSTEM SET db_cache_size=80M
ALTER SYSTEM SET shared_pool_size=60M
Hands-on
In this exercise we will learn how to re-size the BUFFER CACHE memory allocation and use the
Dynamic Buffer Cache Advisory parameter. We will also learn what the Least Recently Used (LRU)
list and the Dirty list are.
Connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
Buffers
The buffers in the buffer cache are organized in two lists:
1 -- The Least Recently Used (LRU) list, and
2 -- the Dirty list.
Dirty Buffers
The dirty buffers are buffers whose blocks have been changed in the buffer cache. The dirty buffers
are moved to the dirty list and written to the datafiles by the DB Writer processes (DBWn). When a
free buffer is found, the data block is read from disk into the free buffer and the buffer is moved
higher in the LRU list. The Least Recently Used (LRU) list orders the buffers in the SGA by how
recently they have been used; the least recently used buffers are the first candidates for reuse.
When you first configure an Oracle instance, you may set the buffer cache size too high or too low.
That results in either too much I/O or poorly utilized real memory. To assist you in the proper
configuration, Oracle provides the DB_CACHE_ADVICE parameter. When this parameter
is set to ON, Oracle begins collecting statistics about cache utilization and projects the physical I/O
for 20 cache sizes, ranging from 10 to 200 percent of the current size.
Using Buffer Cache Advisory
Now, let's turn on the Dynamic Buffer Cache Advisory parameter.
SQL> ALTER SYSTEM SET db_cache_advice=ON;
Notice that there are three possible values:
1 -- ON - allocates memory and gathers statistics.
2 -- OFF - disables advice statistic gathering.
3 -- READY - allocates memory, but statistics are not gathered.
Determine a potential Buffer Cache problem
Query the V$DB_CACHE_ADVICE view to determine potential physical I/O that would result from
using a different sized buffer cache.
SQL> SELECT size_for_estimate "Estimated Cache size (Mb)",
buffers_for_estimate "Buffers",
estd_physical_reads "Estimated Reads"
FROM v$db_cache_advice
ORDER BY 1
/
Reading V$DB_CACHE_ADVICE view
From the following list, we can see that increasing the buffer cache from 53 to 61 megabytes does
not reduce the "estimated reads" column enough to justify the memory. Therefore, a buffer cache
size of 55 MB (anything between 53 and 61) is a better candidate for the database configuration
than 77 megabytes. Notice that the first entry is 10 percent of the current buffer size, which is about
77 megabytes.
Estimated Cache size (Mb) Buffers Estimated Reads
------------------------- ---------- ---------------
7.6836 1967 350,769,900
15.3672 3934 237,452,764
23.0508 5901 132,658,845
30.7344 7868 104,758,765
38.418 9835 96,765,231
46.1016 11802 74,765,034
53.7852 13769 64,980,630
61.4688 15736 987,902
69.1523 17703 4727
76.8359 19670 4727
84.5195 21637 4727
92.2031 23604 4727
99.8867 25571 4727
107.5703 27538 4727
115.2539 29505 4727
122.9375 31472 4727
130.6211 33439 4727
138.3047 35406 4727
145.9883 37373 4727
153.6719 39340 4727
20 rows selected.
We will now reduce the buffer cache size from 77 to 55 megabytes and increase the shared pool
size from 60 to 80 megabytes.
Resize Buffer Cache
First check the SGA memory maximum allocation size.
SQL> SHOW PARAMETER sga_max_size
Now you should be able to see the maximum size to which the SGA can grow.
Compare the SGA parameter size with the calculated size from the Shared Pool, Buffer Cache, and
Redo Log sizes along with Java pool.
SQL> SHOW PARAMETER sga_max_size
SQL> SELECT SUM(value) as "SGA Size"
FROM v$parameter
WHERE name in
('shared_pool_size','db_cache_size','log_buffer','java_pool_size')
/
Notice that if the sizes are very close, then we should decrease one size in order to increase the
others.
Check the Shared Pool, and Buffer Cache sizes individually.
SQL> SHOW PARAMETER shared_pool_size
SQL> SHOW PARAMETER db_cache_size
Take a note on the Shared Pool and DB Cache sizes.
Turn OFF the Dynamic Buffer Cache Advisory parameter.
SQL> ALTER SYSTEM SET db_cache_advice=OFF;
Then, decrease the Buffer Cache size to 55 Megabytes.
SQL> ALTER SYSTEM SET db_cache_size=55M
/
Increase the Shared Pool size to 80 Megabytes.
SQL> ALTER SYSTEM SET shared_pool_size=80M
/
Check the Shared Pool, and Buffer Cache sizes individually again.
SQL> SHOW PARAMETER shared_pool_size
SQL> SHOW PARAMETER db_cache_size
Notice that the shared pool size and db buffer cache size were adjusted based on the Granule unit.
That is the reason their adjusted sizes are more than their assigned sizes.
More on Resizing and Measuring the Buffer Cache Hit Ratio
Introduction
As a DBA, you are responsible for monitoring and calculating the Buffer Cache Hit Ratio in the SGA
memory in case of performance problems. Your job’s responsibilities dictate that you should at least
be informed of the following basic fundamental subjects:
Measuring the Buffer Cache Hit Ratio
Creating a cache table
Calculating the Hit Ratio for Multiple Pools
Displaying the Hit Ratio for the KEEP buffer pool
Caching the Oracle objects in the Buffer Pools
Diagnosing the FREELIST contentions
Adding a FREELIST to a table
Using the DEFAULT pool
Using the KEEP pool
Using the RECYCLE pool
Using the V$SYSSTAT view
Using the V$BUFFER_POOL view
Using the V$BUFFER_POOL_STATISTICS dictionary view
Using the DBA_SEGMENTS view
Using the V$SESSION_WAIT view
Dropping a table
Commands:
ALTER SYSTEM SET db_cache_size=60m
ALTER SYSTEM SET db_keep_cache_size=16m
CREATE TABLE STORAGE (BUFFER_POOL KEEP)
SELECT /*+ FULL(d) CACHE(d) */
ALTER TABLE STORAGE (FREELISTS 2)
Hands-on
In this exercise you will learn how to: measure the Buffer Cache Hit Ratio, create a table to keep in
the KEEP buffer pool, calculate the Hit Ratio for multiple pools, cache the Oracle objects in the Buffer
Pools, diagnose the FREELIST contentions, and add a FREELIST to a table. You also learn what the
DEFAULT, KEEP, and RECYCLE pools are.
Begin by connecting to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
Buffer Cache Hit Ratio
Let's calculate the Buffer Cache Hit Ratio from the V$SYSSTAT view. The V$SYSSTAT view contains
Oracle system usage statistics such as 'session logical reads', 'physical reads direct', and so on.
SQL> SELECT 1- ((p.value - l.value - d.value) / s.value)
AS "Buffer Cache Hit Ratio"
FROM v$sysstat s, v$sysstat l, v$sysstat d, v$sysstat p
WHERE s.name = 'session logical reads'
AND d.name = 'physical reads direct'
AND l.name = 'physical reads direct (lob)'
AND p.name = 'physical reads'
/
Note that if the Buffer Cache Hit Ratio is more than 90%, then there is no problem. If the Buffer
Cache Hit Ratio is between 70% and 90%, there could be a problem. And if the Buffer Cache Hit
Ratio is less than 70%, there is definitely a problem and the Buffer Cache size needs to be increased.
In the above query, the ‘physical reads’ value is the number of reads that Oracle physically performs
from the hard disk, including all the ‘physical reads direct’ and ‘physical reads direct (lob)’ reads.
You want the ‘physical reads direct’ value to be as high as possible relative to the ‘physical reads’
value, because direct reads bypass the Buffer Cache and do not count against the hit ratio. You also
want the ‘session logical reads’ value to be very high. The ‘session logical reads’ value is the number
of times that Oracle reads a block from memory (the Buffer Cache) rather than from disk.
Resize Buffer Cache
Let's first reduce the buffer cache size from 80 megabytes to 60 megabytes in order to make room
in memory for an additional buffer pool.
SQL> ALTER SYSTEM SET db_cache_size=60m
/
Allocating the KEEP buffer pool
Then, allocate memory space to the KEEP buffer pool.
SQL> ALTER SYSTEM SET db_keep_cache_size=16m
/
Using KEEP buffer pool
Now, you can create a table to be kept in the KEEP buffer pool.
SQL> CREATE TABLE iself.mykeep
(col1 NUMBER,
col2 VARCHAR2(10))
STORAGE (BUFFER_POOL KEEP)
/
Notice that if we don't specify a BUFFER_POOL, the DEFAULT pool is used.
The V$BUFFER_POOL view contains the Oracle buffer pools configuration. You can use this view to
query the buffer pool configurations information such as DEFAULT, KEEP, or RECYCLE pools.
Check how buffer pool was configured.
SQL> SELECT name, buffers
FROM v$buffer_pool
/
The name column values can be DEFAULT, KEEP, or RECYCLE.
DEFAULT buffer pool
The DEFAULT pool is the same thing as the standard block size Buffer Cache.
KEEP buffer pool
The KEEP buffer pool is used to keep buffers in the pool as long as possible for data blocks that are
likely to be reused.
RECYCLE buffer pool
The RECYCLE buffer pool is used as a temporary host for blocks from segments that you don't want
to interfere with the blocks in the DEFAULT buffer pool.
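As with the KEEP pool, you can assign a table to the RECYCLE pool at creation time; this sketch assumes memory has already been allocated to the pool (for example, through the DB_RECYCLE_CACHE_SIZE parameter), and the table name is an example:

SQL> CREATE TABLE iself.myrecycle
(col1 NUMBER)
STORAGE (BUFFER_POOL RECYCLE)
/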
Buffer Cache Hit Ratio for multiple pools
Now, calculate the Hit Ratio for multiple pools using the V$BUFFER_POOL_STATISTICS dictionary
view.
SQL> SELECT name,
1-(physical_reads/(db_block_gets + consistent_gets)) "Hit Ratio"
FROM v$buffer_pool_statistics
WHERE db_block_gets + consistent_gets > 0
/
Notice that the Hit Ratio for the KEEP buffer pool is very high.
Cache an object
Now, cache the department table with a hint in a SQL statement. The CACHE hint takes the table
alias and applies to full table scans.
SQL> SELECT /*+ FULL(d) CACHE(d) */ *
FROM iself.dept d
/
Now, the dept table should be in the memory.
Check FREELIST contention in Buffer Cache
A FREELIST is an allocated structure in a table's segment header that contains references to all the
blocks that are candidates for newly inserted records. Any contention on the FREELIST will create a
performance problem.
Now, let's diagnose the FREELIST contention in the Buffer Cache.
SQL> SELECT s.segment_name, s.segment_type,
s.FREELISTs, w.wait_time,
w.seconds_in_wait, w.state
FROM dba_segments s, v$session_wait w
WHERE w.event='buffer busy waits'
AND w.p1=s.header_file
AND w.p2=s.header_block
/
Note that no segment name is returned. Normally that is what you get when you have no FREELIST
contention problem. If the query returns rows, we should increase the number of FREELISTs on the
table in question.
Note that the DBA_SEGMENTS view contains all the created users’ segments such as tables, indexes,
etc. The V$SESSION_WAIT view contains dynamic information for that instance and for that specific
time. Its content will be regenerated when you restart an instance. It contains the contentions
information such as ‘buffer busy waits’ for a file or a block, etc.
Increase FREELIST
If you identify a segment header that has a FREELIST contention, you can increase the number of
FREELISTs for the segment.
SQL> ALTER TABLE iself.dept
STORAGE (FREELISTS 2)
/
This should eliminate the FREELIST contention.
Drop a table
Drop the iself.mykeep table.
SQL> DROP TABLE iself.mykeep
/
You drop the table so you can repeat this hands-on again if you wish.
Monitor and size the Redo Log buffer
Introduction
As a DBA, you are responsible for monitoring and resizing the Redo Log buffer in the SGA memory in
case of performance problems. Your job’s responsibilities dictate that you should at least be
informed of the following basic fundamental subjects:
Monitoring the Redo Log Buffer memory size
Re-sizing the Redo Log Buffer memory size
Checking the redo buffer allocation retries ratio
Checking waiting sessions
Checking for an Online Full Redo Log file
Using the V$SESSION_WAIT view
Hands-on
In this exercise you will learn how to: monitor and re-size the REDO LOG BUFFER, check the redo
buffer allocation retries, calculate the redo buffer allocation retries ratio, check sessions
waiting for log buffer space, and check for a full Online Redo Log file.
Connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
REDO log entries
The redo entries in the Redo Log files are used for database recovery. The Redo Log buffer is
usually flushed when it becomes one-third full, on frequent COMMITs, and every three seconds.
Notice that if you have a fast processor and a slow disk, the Server Process may fill the Redo Log
buffer faster than the Log Writer can write the redo entries to the online Redo Log file, and you
may have to increase the size of the Redo Log buffer to avoid such contention.
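Before resizing anything, you can check the current Redo Log buffer size.

SQL> SHOW PARAMETER log_buffer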
View REDO log buffer allocation retries
Keep your eyes on the 'redo buffer allocation retries' statistic.
SQL> SELECT name, value
FROM v$sysstat
WHERE name = 'redo buffer allocation retries'
/
Note that a positive number means that sessions had to wait for space in the Redo Log buffer.
Compare it with the 'redo entries' statistic; it should not be more than 1% of that value.
REDO log buffer retries ratio
Query the redo buffer allocation retries ratio. Be sure that the ratio is not more than 1%.
SQL> SELECT a.value/b.value "redo buffer retries ratio"
FROM v$sysstat a, v$sysstat b
WHERE a.name = 'redo buffer allocation retries'
AND b.name = 'redo entries'
/
If the number is greater than 1%, you should increase the size of the Redo Log buffer. You should
also check the checkpoint and size of the online redo log file.
Waiting for REDO log buffer
Check to see if any other sessions are waiting for log buffer space.
SQL> SELECT sid, event, seconds_in_wait, state
FROM v$session_wait
WHERE event = 'log buffer space'
/
If 'log buffer space' waits exist, consider increasing the size of the Redo Log buffer. You should
also check the speed of the disks that hold the Online Redo Log files.
REDO log space requests
Now, check to see if that Online Redo Log file is full and the server is waiting for the next Redo Log
file.
SQL> SELECT name, value
FROM v$sysstat
WHERE name = 'redo log space requests'
/
To resize the Redo Log buffer, do the following, where ### is the integer size (in megabytes) that
you want to assign to the log buffer:
SQL> ALTER SYSTEM SET log_buffer=###M SCOPE=SPFILE;
Database I/O problem
Introduction
As a DBA, you are responsible for monitoring and detecting I/O problems of your organization’s
database in case of performance problems. Your job’s responsibilities dictate that you should at least
be aware of the following basic fundamental subjects:
Monitoring the Database I/O problem
Monitoring the Checkpoint process activities
Tuning the Checkpoint process activities
Using the V$FILESTAT view
Using the V$SYSSTAT view
Using the V$SYSTEM_EVENT view
Using the DBA_DATA_FILES view
Setting the UNDO_MANAGEMENT parameter
Monitoring the SQL statement sorting
Distributing tablespaces on different disks
Using the Redundant Array of Inexpensive Disks (RAID)
Commands:
ALTER SYSTEM SET undo_management=AUTO SCOPE=spfile
Hands-on
In this exercise you will learn how to: detect, monitor and fix the database I/O problem, monitor and
tune the checkpoint process activities, set the undo_management to AUTO, and monitor the SQL
sorting on the disk space.
Connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
Avoid I/O contention
In order to reduce the I/O contention we should at least consider the following steps:
01 -- Use Oracle to distribute the data files evenly across multiple disks.
02 -- Use Oracle partitioning.
03 -- Use locally managed tablespaces, unless you have a reason not to do so.
04 -- Keep the Redo Log files, controlfiles, and dump files on their own disk.
05 -- Keep all UNDO or ROLLBACK segments on the same disk.
06 -- Keep the rollback segments and the Redo Log files on separate disks.
07 -- Keep the data, index, SYSTEM, and UNDO tablespaces on separate disks.
08 -- Keep the data and temporary tablespaces on separate disks.
09 -- Use a Redundant Array of Inexpensive Disks (RAID). RAID is redundancy that you build into
your system, apart from Oracle, in order to provide data duplication. You can use RAID supported by
hardware or by software. It protects you in the case of a disk crash or failure: multiple disks are
formatted in a RAID layout such that if one of them fails, you replace the bad disk with a new one
and all its data is regenerated from the other disks.
10 -- Use raw devices if possible.
Display I/O activities
Check the I/O transaction activity in the data files across multiple disks.
SQL> SELECT file_name, phyrds, phywrts
FROM v$filestat a, dba_data_files b
WHERE a.file# = b.file_id
/
Make sure to distribute the data files so that the average I/O volumes are as close to each other as
possible. It is normally bad practice to keep all datafiles on the same disk.
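You can also sum the I/O per disk as a rough sketch. The SUBSTR below assumes the drive letter is
the first two characters of the file name; adjust it to your own naming convention.

SQL> SELECT SUBSTR(b.file_name,1,2) disk,
SUM(a.phyrds + a.phywrts) total_io
FROM v$filestat a, dba_data_files b
WHERE a.file# = b.file_id
GROUP BY SUBSTR(b.file_name,1,2)
/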
UNDO mode
Also, unless you have a good reason, make sure to set the automatic undo management mode to
AUTO.
SQL> ALTER SYSTEM SET undo_management=AUTO
SCOPE=spfile
/
When the system is in AUTO mode and a transaction needs more space, Oracle automatically borrows
space from other undo segments that have extra space.
Database tables scan
Now, check to see how many times you have to scan the short and long tables.
SQL> SELECT name, value
FROM v$sysstat
WHERE name IN ('table scans (short tables)',
'table scans (long tables)')
/
Try to reduce these numbers by creating proper table indexes. Note that the count for 'table scans
(long tables)' should be very small.
Checkpoint activities
Query the V$SYSSTAT dictionary view for the checkpoint process activity to monitor the checkpoint
process.
SQL> SELECT name, value
FROM v$sysstat
WHERE name like 'background check%'
/
If the "background checkpoints started" value is greater than the "background checkpoints
completed" value, you should increase the size of the REDO LOG files.
REDO log file parallel write
Query the V$SYSTEM_EVENT dictionary view for the 'log file parallel write' event to monitor and
tune the Redo Log files.
SQL> SELECT event, total_waits, time_waited
FROM v$system_event
WHERE event = 'log file parallel write'
/
High TOTAL_WAITS or TIME_WAITED values indicate a possible I/O problem.
Optimize Sort Operations
Introduction
As a DBA, you are responsible for optimizing sort operations of your organization’s database in
case of performance problems. Your job’s responsibilities dictate that you should be aware of the
following basic fundamental subjects:
Monitoring a sort statement operation
Optimizing a sort statement operation
Using the V$SYSSTAT view
SORTS (DISK)
SORTS (MEMORY)
Calculating the sort Ratio
SORT_AREA_SIZE
PGA_AGGREGATE_TARGET
Increasing the SORT_AREA_SIZE parameter
Hands-on
In this exercise, you will learn how to monitor and optimize sort operations.
Now, connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
Sorting process
The Oracle Server Processes will sort as much as they can in the memory sort area before using any
disk sort space.
Sort performed destination (Memory vs. Disk)
Now, query the V$SYSSTAT view to track the number of in-memory and to-disk sorts, as well as the
total number of rows sorted.
SQL> SELECT name, value
FROM v$sysstat
WHERE name like 'sorts%'
/
Notice that the sorts (disk) number should be very low, while the sorts (memory) number can be as
high as possible.
The ‘sorts (memory)’ value is the number of sorts that Oracle performed in memory. The ‘sorts
(disk)’ value is the number of sorts that Oracle performed on disk, using the TEMPORARY tablespace.
Sort HIT Ratio (Memory vs. Disk)
Now, determine the ratio of in-memory sorts to total sorts.
SQL> SELECT 100*a.value/(a.value + b.value) AS "Sort Ratio"
FROM v$sysstat a, v$sysstat b
WHERE a.name = 'sorts (memory)'
AND b.name = 'sorts (disk)'
/
Notice that the sort ratio should be greater than 95 percent. If you are not using automatic PGA
memory management and the number is less than 95 percent, you should strongly consider increasing
the value of the SORT_AREA_SIZE parameter. If you are using automatic PGA memory management,
increase the value of the PGA_AGGREGATE_TARGET parameter instead.
Increase sort area in memory
Try to increase the SORT_AREA_SIZE parameter to 819200 bytes.
SQL> ALTER SYSTEM SET sort_area_size=819200 SCOPE=spfile
/
In order for the change to take effect, the database needs to be restarted.
Shutdown and startup the database in order to see the changes.
SQL> SHUTDOWN IMMEDIATE;
SQL> CONNECT system/manager AS SYSDBA
SQL> STARTUP
Diagnosing Contention for Latches
Introduction
As a DBA, you are responsible for diagnosing any latch contentions in the Shared Pool area in case of
performance problems. Your job’s responsibilities dictate that you should at least be informed of the
following basic fundamental subjects:
Diagnosing contention for latches
Viewing the Shared Pool memory size
Viewing the Library Cache Hit Ratio
Viewing the Redo Allocation Latch ratio
Viewing the Redo Copy Latch wait ratio
Types of latch requests
Willing to wait request
Immediate Request
Using the V$LATCH dictionary view
Using the V$LATCHHOLDER view
Using the V$LATCHNAME view
Calculating the Shared Pool Latch Hit Ratio
Calculating the Redo Allocation Latch ratio
Calculating the Copy Wait Ratio
Hands-on
In this exercise you will learn how to: diagnose contention for latches, query the Shared Pool and
Library Cache Hit Ratio, and query the Redo Allocation Latch and the Redo Copy Latch wait ratios.
You will learn about different types of latch requests.
Connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
You use the V$LATCH dictionary view to query all active latches that your database is currently
using, the V$LATCHNAME view to map a latch name to its associated latch number, and the
V$LATCHHOLDER view to query the latches that are currently held.
Latches:
You should think of a latch as a permission that can be given to one server process at a time.
Latches protect shared memory allocation, and also protect shared data structures in the SGA.
Oracle has two different types of latch requests: willing to wait or immediate.
Willing to wait request
In a willing-to-wait request, the process requests the latch; if it cannot obtain it, the process
waits briefly (sleeps) and then requests the latch again.
Immediate request
In an immediate request, if the process cannot obtain the requested latch, it does not wait; it
continues with other work and attempts to obtain the latch again later.
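You can see both request modes side by side in the V$LATCH view: the GETS and MISSES columns count
willing-to-wait requests, and the IMMEDIATE_GETS and IMMEDIATE_MISSES columns count immediate
requests.

SQL> SELECT name, gets, misses,
immediate_gets, immediate_misses
FROM v$latch
WHERE name IN ('redo allocation', 'redo copy')
/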
Latches Hit Ratio
Query the Library Cache and Shared Pool request latches Hit Ratio from the V$LATCH dictionary
view.
SQL> SELECT name, (1-(misses/gets))*100
AS "Ratio", sleeps
FROM v$latch
WHERE name in ('library cache', 'shared pool')
/
The ratio must be above 99 percent. For example, if the Shared Pool latch Hit Ratio is less than 99
percent, you have contention for the Shared Pool latch; this indicates that you may need to tune
the application, that the application cursor cache may be too small, or that cursors may be closed
explicitly too soon.
If there is a problem in the Library Cache latch, then unshared SQL, reparsed sharable SQL, or an
undersized Library Cache is contributing to the Library Cache latch contention. Consider using
bind variables in the application, or increase the Shared Pool size.
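As an illustration only, using the iself.dept table from the earlier exercises (the v_deptno
variable name and the deptno column are assumptions based on the standard DEPT table), the
bind-variable form below produces one sharable statement in the Library Cache no matter how many
department numbers you query:

SQL> VARIABLE v_deptno NUMBER
SQL> EXECUTE :v_deptno := 10
SQL> SELECT * FROM iself.dept WHERE deptno = :v_deptno
/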
Query the Redo Allocation Latch and the Redo Copy Latch wait ratios.
SQL> SELECT h.pid, n.name, (l.misses/l.gets)*100 wait_ratio
FROM v$latchholder h, v$latchname n, v$latch l
WHERE h.laddr = l.addr
AND l.latch# = n.latch#
AND n.name in ('redo allocation', 'redo copy')
/
Notice that if there is output and the wait ratio is more than 1, there is a problem, and you will
need to increase the Redo Log buffer size.
Tuning rollback or undo segment
Introduction
As a DBA, you are responsible for tuning rollback or undo segments in case of performance
problems. Your job’s responsibilities dictate that you should at least be informed of the following
basic fundamental subjects:
Tuning Rollback or UNDO Segments
Setting the UNDO_MANAGEMENT parameter
Setting the UNDO_RETENTION parameter
Setting the UNDO_TABLESPACE parameter
Calculating an estimate of the UNDO space needed to meet the UNDO retention requirement
Tuning the manual UNDO segments
Listing block contentions
Calculating the UNDO segment Hit Ratio
Using the V$UNDOSTAT view
Using the V$ROLLNAME view
Using the V$ROLLSTAT view
Using the V$WAITSTAT view
Using the V$SYSSTAT view
Checking the DB BLOCK GETS parameter
Checking the CONSISTENT GETS parameter
Checking the SHRINKS column
Checking the AVESHRINKS column
Hands-on
In this exercise you will learn how to: tune rollback or undo segments, set the UNDO_MANAGEMENT
parameter to AUTO, calculate an estimate of the UNDO space needed to meet the UNDO retention requirement,
tune the MANUAL UNDO segments, list block contention, calculate the UNDO segment Hit Ratio, and
more.
So, let's connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
View UNDO parameters
First, check the UNDO_MANAGEMENT parameter.
SQL> SHOW PARAMETER undo
Always set the UNDO_MANAGEMENT parameter to AUTO. In the AUTO option, the database takes
control of how to manage the UNDO segments. The UNDO_RETENTION parameter indicates the
number of seconds that the database keeps the UNDO segments. The UNDO_TABLESPACE
parameter indicates the UNDO tablespace.
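For example, to keep the undo information for 15 minutes (900 seconds):

SQL> ALTER SYSTEM SET undo_retention=900
/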
Spaces to meet UNDO retention
Use the V$UNDOSTAT view to calculate an estimate of undo spaces to meet the undo retention
requirement for 15 minutes.
SQL> SELECT (xx*(ups*overhead) + overhead) AS "Bytes"
FROM (SELECT value AS xx
FROM v$parameter WHERE name = 'undo_retention'),
(SELECT (SUM(undoblks)/SUM((end_time-begin_time)*86400))
AS ups
FROM v$undostat),
(SELECT value AS overhead
FROM v$parameter
WHERE name = 'db_block_size')
/
The result of this query shows how much UNDO space we need to meet the UNDO retention
requirement.
List all UNDO segments
Query the V$ROLLNAME dictionary view to list the UNDO segments.
SQL> SELECT * FROM v$rollname
/
Shrinking UNDO segments
Query the V$ROLLSTAT and V$ROLLNAME dictionary views where the number of SHRINKS is more than 1.
This is only applicable if you are using UNDO segments manually.
SQL> SELECT a.name, b.extents, b.optsize, b.shrinks,
b.aveshrink, writes
FROM v$rollname a, v$rollstat b
WHERE a.usn = b.usn
AND b.shrinks > 1
/
The WRITES column indicates the number of bytes written to the rollback segment. Notice that if the
UNDO segment size is not big enough, you may get the following error message: ORA-01555:
snapshot too old.
If the UNDO_MANAGEMENT is AUTO ignore the optimal size.
If it is MANUAL, then be sure that the UNDO segments have an optimal size.
If the SHRINKS value is HIGH and the AVESHRINKS value is HIGH then increase the Optimal size.
If the SHRINKS value is HIGH and the AVESHRINKS value is LOW then increase the Optimal size.
If the SHRINKS value is LOW and the AVESHRINKS value is LOW then decrease the Optimal size.
If the SHRINKS value is LOW and the AVESHRINKS value is HIGH then the Optimal size is okay.
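For a manually managed rollback segment, you would adjust the optimal size as follows; the segment
name rbs01 and the 4M size here are only examples:

SQL> ALTER ROLLBACK SEGMENT rbs01
STORAGE (OPTIMAL 4M)
/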
Block contention statistics
Query the V$WAITSTAT view to list block contention statistics.
SQL> SELECT *
FROM v$waitstat
WHERE class LIKE '%undo%'
/
Note the UNDO header.
DB BLOCK GETS and CONSISTENT GETS values
Then, query the V$SYSSTAT to gather statistics for the DB BLOCK GETS and the CONSISTENT GETS
parameters.
SQL> COL name FORMAT a40
SQL> SELECT name, value
FROM v$sysstat
WHERE name in ('db block gets','consistent gets')
/
Note the DB BLOCK GETS, and CONSISTENT GETS values.
Calculate Cache Buffer Hit Ratio
Calculate the Hit Ratio from the following formula, where 'undo header waits' is the waits count
for the undo header class from V$WAITSTAT:
Hit Ratio = (db block gets + consistent gets - undo header waits) /
(db block gets + consistent gets)
If the UNDO_MANAGEMENT parameter is MANUAL and the Hit Ratio is less than 99 percent, you may have
a problem with rollback segment contention, and you may have to increase the number of rollback
segments. This is not applicable when the UNDO_MANAGEMENT parameter is set to AUTO.
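As a sketch, the formula can be computed in a single statement by combining the V$SYSSTAT and
V$WAITSTAT views:

SQL> SELECT (g.gets - w.waits) / g.gets * 100 "Hit Ratio"
FROM (SELECT SUM(value) gets FROM v$sysstat
WHERE name IN ('db block gets', 'consistent gets')) g,
(SELECT count waits FROM v$waitstat
WHERE class = 'undo header') w
/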
View UNDO activities
Query the V$SYSSTAT dictionary view to gather statistics for the rollback and UNDO activities.
SQL> COL name FORMAT a60
SQL> SELECT name, value
FROM v$sysstat
WHERE name LIKE '%roll%'
OR name LIKE '%undo%'
/
Monitoring and Detecting Lock Contention
Introduction
As a DBA, you are responsible for monitoring and detecting a lock contention in case of a
performance problem. Your job’s responsibilities dictate that you should at least be informed of the
following basic fundamental subjects:
Monitoring and detecting a lock contention
Locking a table in the exclusive mode
Locking a table in the shared mode
Displaying locks contention
Killing a session
Using the V$LOCK view
Using the DBA_OBJECTS view
Using the V$LOCKED_OBJECT view
Using the V$SESSION view
The SID column
The SERIAL# column
Types of locks
TX-Transaction Enqueue
TM-DML Enqueue
UL- User Supplied
Commands:
LOCK TABLE IN EXCLUSIVE MODE
LOCK TABLE IN SHARE MODE
ALTER SYSTEM KILL SESSION
Hands-on
In this exercise you will learn how to monitor and detect a lock contention, lock a table exclusively,
query locks contention, and kill a session.
First, connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
Locks
The Oracle server automatically manages object locking, so most application developers don't need
to focus on lock management. You should avoid any lock contention, because it affects your
performance very significantly. Lock contention is a delay that occurs when Oracle cannot lock a
record or records because another session holds an exclusive lock on the object or objects. Oracle
has user locks and system locks. Here we only talk about user locks. System locks are held for
extremely short periods of time and are controlled by Oracle. Oracle has lots of system locks; the
following are some examples of system locks.
BL -- Buffer hash table instance
CF -- Control file schema global enqueue
CI -- Cross-instance function invocation instance
CU -- Cursor bind
DF -- Data file instance
DL -- Direct loader parallel index create
DM -- Mount/startup db primary/secondary instance
DR -- Distributed recovery process
DX -- Distributed transaction entry
FS -- File set
HW -- Space management operations on a specific segment
IN -- Instance number
IR -- Instance recovery serialization global enqueue
IS -- Instance state
IV -- Library cache invalidation instance
JQ -- Job queue
KK -- Thread kick
LA..LP -- Library cache lock instance lock (A..P = namespace)
MM -- Mount definition global enqueue
MR -- Media recovery
NA..NZ -- Library cache pin instance (A..Z = namespace)
PF -- Password file
PI, PS -- Parallel operation
PR -- Process startup
QA..QZ -- Row cache instance (A..Z = cache)
RT -- Redo thread global enqueue
SC -- System commit number instance
SM -- SMON
SN -- Sequence number instance
SQ -- Sequence number enqueue
SS -- Sort segment
ST -- Space transaction enqueue
SV -- Sequence number value
TA -- Generic enqueue
TS -- Temporary segment enqueue (ID2=0)
TS -- New block allocation enqueue (ID2=1)
TT -- Temporary table enqueue
UN -- User name
US -- Undo segment DDL
WL -- Being-written redo log instance
Lock a table manually
You can lock a table manually. Lock the oracle.emp table exclusively.
SQL> LOCK TABLE oracle.emp IN EXCLUSIVE MODE
/
Now, the EMP table is in the exclusive mode. We strongly advise you not to lock any tables
exclusively unless you have to.
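Note that a lock taken with LOCK TABLE is held until the end of the transaction; issuing a COMMIT
or ROLLBACK releases it.

SQL> ROLLBACK
/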
Lock a table in share mode
Lock the oracle.emp table in the shared mode.
SQL> LOCK TABLE oracle.emp IN SHARE MODE
/
Now, the oracle.emp table is in the shared mode.
Go to another session and login as the oracle user. Update an employee record.
SQL> UPDATE emp
SET sal = 1000
WHERE empno = 7788;
View locks
Come back to your previous session and query the V$LOCK view to show information about user
locked objects. Notice that the TYPE column can be TX, TM, and UL. If TYPE equals TX, it means
TRANSACTION ENQUEUE. If TYPE equals TM, it means DML ENQUEUE. If TYPE equals UL, it means
USER SUPPLIED.
SQL> SELECT o.owner, o.object_name, o.object_type, l.type
FROM dba_objects o, v$lock l
WHERE o.object_id = l.id1
AND o.owner = 'ORACLE'
/
Notice that the EMP table is in the EXCLUSIVE lock.
View locks by transactions
Query the V$LOCKED_OBJECT view to show locks acquired by transactions in the system.
SQL> SELECT oracle_username, object_id, session_id
FROM v$locked_object
/
Notice that the SYS user is locking the ORACLE user. Note its SESSION_ID column.
Check if the session has a locked transaction.
SQL> SELECT username, sid, serial#
FROM v$session
WHERE sid IN
(SELECT session_id FROM v$locked_object)
/
Note the SID and SERIAL# columns.
Resolve lock problem
Kill the blocking session.
SQL> ALTER SYSTEM KILL SESSION '&SID,&SERIAL'
/
The session was killed.
Check locks
Query the V$LOCKED_OBJECT view again.
SQL> SELECT oracle_username, object_id, session_id
FROM v$locked_object
WHERE oracle_username <> 'SYS'
/
There should be no locked objects. Killing a session is one way to resolve lock contention.
Optimizing a SQL statement
Introduction
As a DBA, you are responsible for optimizing a SQL statement using the EXPLAIN PLAN statement in
case of performance problems. Your job’s responsibilities dictate that you should at least be informed
of the following basic fundamental subjects:
Using the EXPLAIN PLAN statement
Creating the PLAN_TABLE table
Submitting a SQL statement using the EXPLAIN PLAN statement
Using the SET STATEMENT_ID clause
Recalling the EXECUTION plan from the PLAN_TABLE table
Understanding of the following operations:
TABLE ACCESS FULL
TABLE ACCESS BY INDEX
INDEX UNIQUE SCAN
NESTED LOOPS
MERGE JOIN
FILTER
SORT AGGREGATE
Commands:
START %ORACLE_HOME%
EXPLAIN PLAN SET STATEMENT_ID=
Hands-on
In this exercise you will learn how to use the explain plan statement to determine how the optimizer
will execute the query in question.
First, let's connect to SQLPlus as the ORACLE user.
SQL> CONNECT oracle/learning
Check to see if the PLAN_TABLE exists in the user's schema.
SQL> DESC plan_table
Create PLAN_TABLE
In order to optimize a SQL statement, you execute the EXPLAIN PLAN statement to populate the
PLAN_TABLE table with the plan of execution. Then you write a SQL statement against that table to
query the plan of execution generated by EXPLAIN PLAN.
If PLAN_TABLE does not exist, run the utlxplan.sql script provided in the rdbms folder to create
the PLAN_TABLE table.
SQL> START %ORACLE_HOME%
Now, the PLAN_TABLE table was created.
Check the number of records in the table.
SQL> SELECT count(1)
FROM plan_table
/
There should be no records in the table.
Evaluate a SQL statement
Submit a query to the database using the EXPLAIN PLAN statement, so that the database will list the
plan of execution. Use the SET STATEMENT_ID clause to identify the plan for later review. You
should have one single unique statement_id for each specific SQL statement that you want to
optimize.
SQL> EXPLAIN PLAN
SET STATEMENT_ID='MY_FIRST_TEST'
INTO plan_table FOR
SELECT last_name, trade_date,
sum(shares_owned*current_price) portfolio_value
FROM customers, portfolio, stocks s
WHERE id = customer_id and stock_symbol = symbol
AND trade_date = (SELECT max(trade_date) FROM stocks
WHERE symbol = s.symbol)
GROUP BY last_name, trade_date;
Check the number of records in the table again.
SQL> SELECT count(1)
FROM plan_table
/
Now, there should be at least 13 records in the table.
Display the result of the SQL statement evaluation
Now, recall the execution plan from the PLAN_TABLE table.
SQL> SELECT id, parent_id,
lpad(' ', 2*(level-1)) || operation || ' ' ||
options || ' ' || object_name || ' ' ||
decode (id, 0, 'Cost = ' || position) "Query_Plan"
FROM plan_table
START WITH id = 0 and STATEMENT_ID = 'MY_FIRST_TEST'
CONNECT BY PRIOR ID = PARENT_ID
AND STATEMENT_ID = 'MY_FIRST_TEST'
/
How to read PLAN_TABLE
Now, assuming the following is the output of the above query, let's learn how to read the output
report. Notice that the PARENT_ID and ID columns show a parent-child relationship.
ID PARENT_ID Query_Plan
--- ---------- ----------------------------------------------
0 SELECT STATEMENT Cost =
SORT GROUP BY
"SORT GROUP BY" means Oracle will perform a sort on the data obtained for the user.
1 0 SORT GROUP BY
FILTER
"FILTER" means that this is an operation that adds selectivity to a TABLE ACCESS FULL operation,
based on the contents of the where clause.
2 1 FILTER
NESTED LOOPS
"NESTED LOOPS" indicates that the join statement is occurring.
3 2 NESTED LOOPS
MERGE JOIN
"MERGE JOIN" indicates that the join statement is occurring.
4 3 MERGE JOIN
SORT JOIN
"SORT JOIN" indicates that the join statement is sorting. "TABLE ACCESS FULL" means that Oracle
will look at every row in the table (slowest way).
5 4 SORT JOIN
6 5 TABLE ACCESS FULL STOCKS
7 4 SORT JOIN
8 7 TABLE ACCESS FULL PORTFOLIO
TABLE ACCESS BY INDEX
"TABLE ACCESS BY INDEX" means that Oracle will use the ROWID method to find a row in the table.
It is very fast.
9 3 TABLE ACCESS BY INDEX ROWID CUSTOMERS
INDEX UNIQUE SCAN
"INDEX UNIQUE SCAN" means Oracle will use the primary or unique key. This is the most efficient
way to search an index.
10 9 INDEX UNIQUE SCAN SYS_C003126
SORT AGGREGATE
"SORT AGGREGATE" means Oracle will perform a sort on the data obtained for the user.
11 2 SORT AGGREGATE
12 11 TABLE ACCESS FULL STOCKS
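As an alternative, on Oracle 9i and later the DBMS_XPLAN package can format the same plan for you:

SQL> SELECT *
FROM TABLE(dbms_xplan.display('PLAN_TABLE', 'MY_FIRST_TEST'))
/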
Diagnostic and Tuning Tools
Introduction
Another of your responsibilities as a DBA is to use and read the STATSPACK tool in case of a
performance problem. Your job’s responsibilities dictate that you should be at least informed of
the following basic fundamental subjects:
Installing and Configuring the STATSPACK
Running the SPCREATE script
Creating user PERFSTAT
Connecting to the PERFSTAT user
Cleaning all the STATSPACK tables
Producing a performance report
Taking SNAPSHOTS
Setting the TIMED_STATISTICS parameter
Understanding of the X$, and V$ tables
Using the V$FIXED_TABLE
Understanding the PERFSTAT scripts:
SPCREATE.SQL
SPTRUNC.SQL
SPREPORT.SQL
Reading the output PERFSTAT file sp_1_6
Commands:
SHO PARAMETER TIMED_STATISTICS
ALTER SYSTEM SET TIMED_STATISTICS=TRUE;
START %ORACLE_HOME%
START %ORACLE_HOME%
START %ORACLE_HOME%
EXECUTE statspack.snap;
Hands-on
In this exercise you will learn how to: install and configure STATSPACK, create the PERFSTAT user,
and take a snapshot. You will also learn more about the X$ tables and V$ views, along with the
TIMED_STATISTICS parameter value, and more.
So, let's connect to SQLPlus as the SYSTEM/MANAGER user.
SQL> CONNECT system/manager AS SYSDBA
Introduction
STATSPACK was created in response to a need for more relevant and more extensive statistical
reporting beyond what was available via UTLBSTAT/UTLESTAT reports. These statistics can be stored
permanently in the database so that historical data is available for comparison and diagnosis.
Before we install STATSPACK utility, let us revisit the content of our repository database.
View X$ and V$ tables
Query the number of X$ tables and V$ views you have.
X$ tables
The X$ tables have cryptic names and should not be queried directly. You can use the
V$FIXED_TABLE view to query information about these tables, which are owned by the SYS user.
Normally, their names start with X$, X$_, V$, or V$_.
SQL> SELECT COUNT(1) FROM v$fixed_table
WHERE SUBSTR(name,1,2) = 'X$'
/
V$ views
The V$ views are based on the X$ tables, which are actually memory structures that are populated
at instance startup and cleared at instance shutdown.
SQL> SELECT COUNT(1) FROM v$fixed_table
WHERE SUBSTR(name,1,2) = 'V$'
/
View and Set TIMED_STATISTICS parameter
Check the TIMED_STATISTICS parameter value.
SQL> SHO PARAMETER TIMED_STATISTICS
Set the TIMED_STATISTICS parameter to TRUE to collect timing information in the V$ view.
SQL> ALTER SYSTEM SET TIMED_STATISTICS=TRUE;
Create PERFSTAT user and its objects
Create the PERFSTAT user with its objects. Make the TOOLS tablespace its default tablespace and
the TEMP tablespace its temporary tablespace. Then, run the following script. Make sure that you
have at least approximately 75 megabytes of disk space for the installation.
SQL> START %ORACLE_HOME%
This script creates the PERFSTAT user and its schema (lots of tables, views, synonyms, packages,
etc.). You will get a list of the online tablespaces in the database and should decide in which
tablespace you wish to create the STATSPACK tables and indexes; this will also be the PERFSTAT
user’s default tablespace. You should not specify the SYSTEM tablespace. Specifying the SYSTEM
tablespace will result in the installation FAILING, as using SYSTEM for performance data is not
supported. Specify the TOOLS tablespace for the PERFSTAT user’s default tablespace, or whatever you
wish.
Running the script, you will see some prompt messages to answer. The following are those
messages:
In the ‘Enter value for default_tablespace:’ prompt, type tools.
In the ‘Enter value for temporary_tablespace:’ prompt, type temp.
Now, the PERFSTAT schema should have been created.
Connect as the PERFSTAT user that was created from SPCREATE script.
SQL> CONNECT PERFSTAT/PERFSTAT@SCHOOL
Run STATSPACK utility
Now, clean all the STATSPACK tables.
SQL> START %ORACLE_HOME%
The STATSPACK tables will be truncated, removing all data from them. You may wish to export the
data before continuing.
In the ‘Enter value for return:’ prompt, answer ‘NO’ if you don’t want to continue, or press the
enter key to continue. In this hands-on, press the enter key to continue.
At the end, you should get a message indicating that all tables were truncated. After truncation,
you can take your snapshots. You can take as many snapshots as you need. Make sure that your
snapshots cover the peak period and can address the performance problems.
Take a snapshot. In this example, we will take six snapshots.
SQL> EXECUTE statspack.snap;
SQL> EXECUTE statspack.snap;
SQL> EXECUTE statspack.snap;
SQL> EXECUTE statspack.snap;
SQL> EXECUTE statspack.snap;
SQL> EXECUTE statspack.snap;
Generate a report
Produce a performance report.
SQL> START %ORACLE_HOME%
When you run the SPREPORT script, you will be prompted with a list of all your snapshots. It tells
you when each snapshot started, how many there are, the database name, the instance name, etc.
You can specify the beginning and ending snapshots, which gives you the option to generate a
report only for the period you wish. Check the start time and then use the snap ID next to it for
reference. In this hands-on exercise, you are going to select the whole period.
In the ‘Enter value for begin_snap:’ prompt, enter 1. In the ‘Enter value for end_snap:’ prompt,
enter 6.
When specifying the report name, the script prompts you with a default report file name such as
sp_1_6.
In the ‘Enter value for report_name:’ prompt, press the enter key to accept the default.
Go to Windows Explorer and open the output file sp_1_6.
Watch the following columns:
In the Shared Pool statistics: the Pct Misses in the Library cache should be very low, such as 2%;
on the other hand, the Hit Ratio should be very high. Also, the Pct Misses in the Dictionary cache
should be low, such as 15%.
In the Buffer Cache statistics: the Cache Hit % (Hit Ratio) in the Buffer cache should be above
90%. If it is 70 to 90 percent, it might need tuning; if it is less than 70 percent, it needs
tuning.
In the latch statistics: the Pct Misses in the latch activity should be very close to 0.
In the rollback or UNDO segment statistics: the Pct Waits in the rollback segment stats should be
very low. If the Pct Waits column is high, then you may need more UNDO segments.