Wednesday, November 24, 2010

Basic UNIX Commands for DBAs

The following is a list of commonly used UNIX commands:

    ps - Show processes

    grep - Search files for text patterns

    mailx - Read or send mail

    cat - Join files or display them

    cut - Select columns for display

    awk - Pattern-matching language

    df - Show free disk space

Here are some examples of how the DBA uses these commands:

    List available instances on a server:

$ ps -ef | grep smon

oracle 21832 1 0 Feb 24 ? 19:05 ora_smon_oradb1

oracle 898 1 0 Feb 15 ? 0:00 ora_smon_oradb2

dliu 25199 19038 0 10:48:57 pts/6 0:00 grep smon

oracle 27798 1 0 05:43:54 ? 0:00 ora_smon_oradb3

oracle 28781 1 0 Mar 03 ? 0:01 ora_smon_oradb4

    List available listeners on a server:

$ ps -ef | grep listener | grep -v grep

oracle 23879 1 0 Feb 24 ? 33:36 /8.1.7/bin/tnslsnr listener_db1 -inherit

oracle 27939 1 0 05:44:02 ? 0:00 /8.1.7/bin/tnslsnr listener_db2 -inherit

oracle 23536 1 0 Feb 12 ? 4:19 /8.1.7/bin/tnslsnr listener_db3 -inherit

oracle 28891 1 0 Mar 03 ? 0:01 /8.1.7/bin/tnslsnr listener_db4 -inherit

    Find out file system usage for the Oracle archive destination:

$ df -k | grep oraarch

/dev/vx/dsk/proddg/oraarch 71123968 4754872 65850768 7% /u09/oraarch


    List number of lines in the alert.log file:

$ cat alert.log | wc -l

2984


    List all Oracle error messages from the alert.log file:

$ grep ORA- alert.log

ORA-00600: internal error code, arguments: [kcrrrfswda.1], [], [], [], [], []

ORA-00600: internal error code, arguments: [1881], [25860496], [25857716], []

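The cut, awk, and mailx commands from the list at the top of this post are just as handy day to day. A minimal sketch (the mail address is the same placeholder used later in this post):

$ cut -d":" -f1 /var/opt/oracle/oratab | grep -v "^#"
$ df -k | awk '{ print $5, $6 }'
$ df -k | mailx -s "`hostname` disk usage" primary.dba@company.com

The first command lists the SIDs registered in the oratab file, the second prints only the capacity and mount-point columns, and the third mails a quick disk usage report to the DBA.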

A crontab file consists of six fields:

    Minute (0-59)
    Hour (0-23)
    Day of month (1-31)
    Month (1-12)
    Day of week (0-6, with 0 = Sunday)
    Unix command or shell script

    To edit a crontab file, type:

crontab -e

    To view a crontab file, type:

crontab -l

0 4 * * 5 /dba/admin/analyze_table.ksh

30 3 * * 3,6 /dba/admin/hotbackup.ksh > /dev/null 2>&1

In the example above, the first entry shows that a script to analyze a table runs every Friday at 4:00 a.m. The second entry shows that a script to perform a hot backup runs every Wednesday and Saturday at 3:30 a.m.
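
Putting the six fields together, a complete entry looks like this (the script name is only a placeholder):

# minute hour day-of-month month day-of-week command
45 23 * * 0 /dba/admin/weekly_report.ksh > /dev/null 2>&1

This would run the hypothetical weekly_report.ksh at 11:45 p.m. every Sunday.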

The eight shell scripts provided below cover 90 percent of a DBA's daily monitoring activities. You will need to modify the UNIX environment variables as appropriate.

Check Oracle Instance Availability

The oratab file lists all the databases on a server:

$ cat /var/opt/oracle/oratab

###################################################################

## /var/opt/oracle/oratab ##

###################################################################

oradb1:/u01/app/oracle/product/8.1.7:Y

oradb2:/u01/app/oracle/product/8.1.7:Y

oradb3:/u01/app/oracle/product/8.1.7:N

oradb4:/u01/app/oracle/product/8.1.7:Y

The following script checks all the databases listed in the oratab file and reports whether each one is up or down:

###################################################################

## ckinstance.ksh ##

###################################################################

ORATAB=/var/opt/oracle/oratab

echo "`date` "

echo "Oracle Database(s) Status `hostname` :\n"

db=`egrep -i ":Y|:N" $ORATAB | cut -d":" -f1 | grep -v "\#" | grep -v "\*"`

pslist="`ps -ef | grep pmon`"

for i in $db ; do

echo "$pslist" | grep "ora_pmon_$i" > /dev/null 2>$1

if (( $? )); then

echo "Oracle Instance - $i: Down"

else

echo "Oracle Instance - $i: Up"

fi

done

Use the following to make sure the script is executable:

$ chmod 744 ckinstance.ksh

$ ls -l ckinstance.ksh

-rwxr--r-- 1 oracle dba 657 Mar 5 22:59 ckinstance.ksh*

Here is an instance availability report:

$ ckinstance.ksh

Mon Mar 4 10:44:12 PST 2002

Oracle Database(s) Status for DBHOST server:

Oracle Instance - oradb1: Up

Oracle Instance - oradb2: Up

Oracle Instance - oradb3: Down

Oracle Instance - oradb4: Up

Check Oracle Listener's Availability

A similar script checks for the Oracle listener. If the listener is down, the script will restart the listener:

#######################################################################

## cklsnr.sh ##

#######################################################################

#!/bin/ksh

DBALIST="primary.dba@company.com,another.dba@company.com";export DBALIST

cd /var/opt/oracle

rm -f lsnr.exist

ps -ef | grep mylsnr | grep -v grep > lsnr.exist

if [ -s lsnr.exist ]

then

echo

else

echo "Alert" | mailx -s "Listener 'mylsnr' on `hostname` is down" $DBALIST

TNS_ADMIN=/var/opt/oracle; export TNS_ADMIN

ORACLE_SID=db1; export ORACLE_SID

ORAENV_ASK=NO; export ORAENV_ASK

PATH=$PATH:/bin:/usr/local/bin; export PATH

. oraenv

LD_LIBRARY_PATH=${ORACLE_HOME}/lib;export LD_LIBRARY_PATH

lsnrctl start mylsnr

fi

Check Alert Logs (ORA-XXXXX)

Some of the environment variables used by each script can be put into one profile:

#######################################################################

## oracle.profile ##

#######################################################################

EDITOR=vi; export EDITOR
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/8.1.7; export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
TNS_ADMIN=/var/opt/oracle; export TNS_ADMIN
NLS_LANG=american; export NLS_LANG
NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'; export NLS_DATE_FORMAT
ORATAB=/var/opt/oracle/oratab; export ORATAB
PATH=$PATH:$ORACLE_HOME:$ORACLE_HOME/bin:/usr/ccs/bin:/bin:/usr/bin:/usr/sbin:/sbin:/usr/openwin/bin:/opt/bin:.; export PATH
DBALIST="primary.dba@company.com,another.dba@company.com"; export DBALIST

The following script first calls oracle.profile to set up all the environment variables. The script also sends the DBA a warning e-mail if it finds any Oracle errors:

####################################################################

## ckalertlog.sh ##

####################################################################

#!/bin/ksh

. /etc/oracle.profile

for SID in `cat $ORACLE_HOME/sidlist`

do

cd $ORACLE_BASE/admin/$SID/bdump

if [ -f alert_${SID}.log ]

then

mv alert_${SID}.log alert_work.log

touch alert_${SID}.log

cat alert_work.log >> alert_${SID}.hist

grep ORA- alert_work.log > alert.err

fi

if [ `cat alert.err|wc -l` -gt 0 ]

then

mailx -s "${SID} ORACLE ALERT ERRORS" $DBALIST < alert.err

fi

rm -f alert.err

rm -f alert_work.log

done

Clean Up Old Archived Logs

The following script cleans up old archive logs if the log file system reaches 90-percent capacity:

$ df -k | grep arch

Filesystem kbytes used avail capacity Mounted on

/dev/vx/dsk/proddg/archive 71123968 30210248 40594232 43% /u08/archive

#######################################################################

## clean_arch.ksh ##

#######################################################################

#!/bin/ksh

df -k | grep arch > dfk.result

archive_filesystem=`awk -F" " '{ print $6 }' dfk.result`

archive_capacity=`awk -F" " '{ print $5 }' dfk.result`

if [[ ${archive_capacity%\%} -gt 90 ]]   # strip the trailing % and compare numerically

then

echo "Filesystem ${archive_filesystem} is ${archive_capacity} filled"

# try one of the following options, depending on your needs
find $archive_filesystem -type f -mtime +2 -exec rm -r {} \;
# tar  (archive the old logs elsewhere first)
# rman (back them up with Recovery Manager first)

fi
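
Instead of simply removing the files, the tar or rman options mentioned in the comments could back the logs up first. The following is only a sketch under assumed names (tape device, 8i-style RMAN channel allocation); adjust it for your site:

# Option 1: tar the old archive logs to tape, then remove them
cd $archive_filesystem
tar -cvf /dev/rmt/0 `find . -type f -mtime +2`
find . -type f -mtime +2 -exec rm {} \;

# Option 2: let RMAN back them up and delete them
rman target / nocatalog <<EOF
run {
allocate channel c1 type disk;
backup archivelog until time 'SYSDATE-2' delete input;
}
EOF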

Analyze Tables and Indexes (for Better Performance)


Below is an example of how to pass parameters to a script:

####################################################################

## analyze_table.sh ##

####################################################################

#!/bin/ksh
# input parameters:  1: password  2: SID
if (( $# < 1 )); then
echo "Please enter 'oracle' user password as the first parameter!"
exit 0
fi
if (( $# < 2 )); then
echo "Please enter instance name as the second parameter!"
exit 0
fi

To execute the script with parameters, type:

$ analyze_table.sh manager oradb1

The first part of the script generates a file, analyze_table.sql, which contains the ANALYZE TABLE statements. The second part of the script runs analyze_table.sql to analyze all the tables:

#####################################################################

## analyze_table.sh ##

#####################################################################

sqlplus -s <<!

oracle/$1@$2

set heading off

set feed off

set pagesize 200

set linesize 100

spool analyze_table.sql

select 'ANALYZE TABLE ' || owner || '.' || segment_name ||

' ESTIMATE STATISTICS SAMPLE 10 PERCENT;'

from dba_segments

where segment_type = 'TABLE'

and owner not in ('SYS', 'SYSTEM');

spool off

exit

!

sqlplus -s <<!

oracle/$1@$2

@./analyze_table.sql

exit

!

Here is an example of analyze_table.sql:

$ cat analyze_table.sql

ANALYZE TABLE HIRWIN.JANUSAGE_SUMMARY ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE HIRWIN.JANUSER_PROFILE ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE APPSSYS.HIST_SYSTEM_ACTIVITY ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE HTOMEH.QUEST_IM_VERSION ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE JSTENZEL.HIST_SYS_ACT_0615 ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE JSTENZEL.HISTORY_SYSTEM_0614 ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE JSTENZEL.CALC_SUMMARY3 ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE IMON.QUEST_IM_LOCK_TREE ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE APPSSYS.HIST_USAGE_SUMMARY ESTIMATE STATISTICS SAMPLE 10 PERCENT;

ANALYZE TABLE PATROL.P$LOCKCONFLICTTX ESTIMATE STATISTICS SAMPLE 10 PERCENT;

Check Tablespace Usage

This script checks tablespace usage. If a tablespace has less than 10 percent free space, it sends an alert e-mail.

#####################################################################

## ck_tbsp.sh ##

#####################################################################

#!/bin/ksh

sqlplus -s <<!

oracle/$1@$2

set feed off

set linesize 100

set pagesize 200

spool tablespace.alert

SELECT F.TABLESPACE_NAME,

TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE),'999,999') "USED (MB)",

TO_CHAR (F.FREE_SPACE, '999,999') "FREE (MB)",

TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTAL (MB)",

TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)),'999') || ' %' PER_FREE

FROM (

SELECT TABLESPACE_NAME,

ROUND (SUM (BLOCKS*(SELECT VALUE/1024

FROM V\$PARAMETER

WHERE NAME = 'db_block_size')/1024)

) FREE_SPACE

FROM DBA_FREE_SPACE

GROUP BY TABLESPACE_NAME

) F,

(

SELECT TABLESPACE_NAME,

ROUND (SUM (BYTES/1048576)) TOTAL_SPACE

FROM DBA_DATA_FILES

GROUP BY TABLESPACE_NAME

) T

WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME

AND (ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)) < 10;

spool off

exit

!

if [ `cat tablespace.alert|wc -l` -gt 0 ]

then

mailx -s "TABLESPACE ALERT for ${2}" $DBALIST < tablespace.alert

fi

An example of the alert mail output is as follows:

TABLESPACE_NAME USED (MB) FREE (MB) TOTAL (MB) PER_FREE

------------------- --------- ----------- ------------------- ------------------

SYSTEM 2,047 203 2,250 9 %

STBS01 302 25 327 8 %

STBS02 241 11 252 4 %

STBS03 233 19 252 8 %


Find Out Invalid Database Objects

The following script finds invalid database objects:

#####################################################################

## invalid_object_alert.sh ##

#####################################################################

#!/bin/ksh

. /etc/oracle.profile

sqlplus -s <<!

oracle/$1@$2

set feed off

set heading off

column object_name format a30

spool invalid_object.alert

SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, STATUS

FROM DBA_OBJECTS

WHERE STATUS = 'INVALID'

ORDER BY OWNER, OBJECT_TYPE, OBJECT_NAME;

spool off

exit

!

if [ `cat invalid_object.alert|wc -l` -gt 0 ]

then

mailx -s "INVALID OBJECTS for ${2}" $DBALIST < invalid_object.alert

fi

$ cat invalid_object.alert

OWNER OBJECT_NAME OBJECT_TYPE STATUS

----------------------------------------------------------------------

HTOMEH DBMS_SHARED_POOL PACKAGE BODY INVALID

HTOMEH X_$KCBFWAIT VIEW INVALID

IMON IW_MON PACKAGE INVALID

IMON IW_MON PACKAGE BODY INVALID

IMON IW_ARCHIVED_LOG VIEW INVALID

IMON IW_FILESTAT VIEW INVALID

IMON IW_SQL_FULL_TEXT VIEW INVALID

IMON IW_SYSTEM_EVENT1 VIEW INVALID

IMON IW_SYSTEM_EVENT_CAT VIEW INVALID

LBAILEY CHECK_TABLESPACE_USAGE PROCEDURE INVALID

PATROL P$AUTO_EXTEND_TBSP VIEW INVALID

SYS DBMS_CRYPTO_TOOLKIT PACKAGE INVALID

SYS DBMS_CRYPTO_TOOLKIT PACKAGE BODY INVALID

SYS UPGRADE_SYSTEM_TYPES_TO_816 PROCEDURE INVALID

SYS AQ$_DEQUEUE_HISTORY_T TYPE INVALID

SYS HS_CLASS_CAPS VIEW INVALID

SYS HS_CLASS_DD VIEW INVALID

Check for Deadlocks

This script sends out an alert e-mail if a deadlock occurs:

###################################################################

## deadlock_alert.sh ##

###################################################################

#!/bin/ksh

. /etc/oracle.profile

sqlplus -s <<!

oracle/$1@$2

set feed off

set heading off

spool deadlock.alert

SELECT SID, DECODE(BLOCK, 0, 'NO', 'YES' ) BLOCKER,

DECODE(REQUEST, 0, 'NO','YES' ) WAITER

FROM V\$LOCK

WHERE REQUEST > 0 OR BLOCK > 0

ORDER BY block DESC;

spool off

exit

!

if [ `cat deadlock.alert|wc -l` -gt 0 ]

then

mailx -s "DEADLOCK ALERT for ${2}" $DBALIST < deadlock.alert

fi

Conclusion

Schedule the scripts with crontab so that they run automatically; for example:

0,20,40 7-17 * * 1-5 /dba/scripts/ckinstance.sh > /dev/null 2>&1

0,20,40 7-17 * * 1-5 /dba/scripts/cklsnr.sh > /dev/null 2>&1

0,20,40 7-17 * * 1-5 /dba/scripts/ckalertlog.sh > /dev/null 2>&1

30 * * * 0-6 /dba/scripts/clean_arch.sh > /dev/null 2>&1

0 5 * * 1,3 /dba/scripts/analyze_table.sh > /dev/null 2>&1

0 5 * * 0-6 /dba/scripts/ck_tbsp.sh > /dev/null 2>&1

0 5 * * 0-6 /dba/scripts/invalid_object_alert.sh > /dev/null 2>&1

0,20,40 7-17 * * 1-5 /dba/scripts/deadlock_alert.sh > /dev/null 2>&1

Now my DBA friends, you can have more uninterrupted sleep at night. You may also have time for more important things such as performance tuning.

I would also like to acknowledge the assistance of Johnny Wedekind of ADP, Ann Collins, Larry Bailey, Husam Tomeh and Archana Sharma of FARES.

Tuesday, November 23, 2010

Oracle architecture




Click Here For Oracle Architecture & Enterprise manager.pdf

Oracle Architecture


New Term: The Oracle Relational Database Management System, or RDBMS, is designed to allow simultaneous access to large amounts of stored information. The RDBMS consists of the database (the information) and the instance (the embodiment of the system). The database contains the physical files that reside on the system and the logical pieces such as the database schema. These database files take various forms, as described in the following section. The instance is the method used to access the data and consists of processes and system memory.


NOTE: Object extensions have been added to the RDBMS with Oracle8. The object extension to tables is covered in detail on Day 12, "Working with Tables, Views, and Synonyms." Oracle refers to Oracle8 as an O-RDBMS (Object-Relational Database Management System). In this book, I refer to Oracle as an RDBMS for clarity.

The Database

The Oracle database has a logical layer and a physical layer. The physical layer consists of the files that reside on the disk; the components of the logical layer map the data to these physical components.

The Physical Layer

The physical layer of the database consists of three types of files:

  • One or more datafiles--Datafiles store the information contained in the database. You can have as few as one datafile or as many as hundreds of datafiles. The information for a single table can span many datafiles or many tables can share a set of datafiles. Spreading tablespaces over many datafiles can have a significant positive effect on performance. The number of datafiles that can be configured is limited by the Oracle parameter MAXDATAFILES.
  • Two or more redo log files--Redo log files hold information used for recovery in the event of a system failure. Redo log files, known as the redo log, store a log of all changes made to the database. This information is used in the event of a system failure to reapply changes that have been made and committed but that might not have been made to the datafiles. The redo log files must perform well and be protected against hardware failures (through software or hardware fault tolerance). If redo log information is lost, you cannot recover the system.
  • One or more control files--Control files contain information used to start an instance, such as the location of datafiles and redo log files; Oracle needs this information to start the database instance. Control files must be protected. Oracle provides a mechanism for storing multiple copies of control files.

The Logical Layer

The logical layer of the database consists of the following elements:

  • One or more tablespaces.
  • The database schema, which consists of items such as tables, clusters, indexes, views, stored procedures, database triggers, sequences, and so on.

Tablespaces and Datafiles

New Term: The database is divided into one or more logical pieces known as tablespaces. A tablespace is used to logically group data together. For example, you can create one tablespace for accounting and a separate tablespace for purchasing. Segmenting groups into different tablespaces simplifies the administration of these groups (see Figure 2.1). Tablespaces consist of one or more datafiles. By using more than one datafile per tablespace, you can spread data over many different disks to distribute the I/O load and improve performance.

Figure 2.1. The relationship between the database, tablespaces, and datafiles.

As part of the process of creating the database, Oracle automatically creates the SYSTEM tablespace for you. Although a small database can fit within the SYSTEM tablespace, it's recommended that you create a separate tablespace for user data. The SYSTEM tablespace is where the data dictionary is kept. The data dictionary contains information about tables, indexes, clusters, and so on.
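
For example, a separate tablespace for user data, spread across two datafiles on different mount points, could be created as follows. This is only a sketch in the sqlplus-from-shell style of the scripts in the previous post; the connect string, names, paths, and sizes are placeholders:

$ sqlplus -s system/manager@oradb1 <<!
CREATE TABLESPACE user_data
DATAFILE '/u02/oradata/oradb1/user_data01.dbf' SIZE 200M,
         '/u03/oradata/oradb1/user_data02.dbf' SIZE 200M;
exit
!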

Datafiles can be operating system files or, in the case of some operating systems, RAW devices. Datafiles and data access methods are described in detail on Day 12.

The Database Schema

New Term: The database schema is a collection of logical-structure objects, known as schema objects, that define how you see the database's data. These schema objects consist of structures such as tables, clusters, indexes, views, stored procedures, database triggers, and sequences.

  • Table--A table, which consists of a tablename and rows and columns of data, is the basic logical storage unit in the Oracle database. Columns are defined by name and data type. A table is stored within a tablespace; often, many tables share a tablespace.
  • Cluster--A cluster is a set of tables physically stored together as one table that shares a common column. If data in two or more tables is frequently retrieved together based on data in the common column, using a clustered table can be quite efficient. Tables can be accessed separately even though they are part of a clustered table. Because of the structure of the cluster, related data requires much less I/O overhead if accessed simultaneously.
  • Index--An index is a structure created to help retrieve data more quickly and efficiently (just as the index in this book allows you to find a particular section more quickly). An index is declared on a column or set of columns. Access to the table based on the value of the indexed column(s) (as in a WHERE clause) will use the index to locate the table data.

NOTE: A new feature in Oracle8 is the index-only table. In an index-only table, the data and index are stored together. This is discussed in detail on Day 13, "Using Indexes and Sequences."
  • View--A view is a window into one or more tables. A view does not store any data; it presents table data. A view can be queried, updated, and deleted as a table without restriction. Views are typically used to simplify the user's perception of data access by providing limited information from one table, or a set of information from several tables transparently. Views can also be used to prevent some data from being accessed by the user or to create a join from multiple tables.
  • Stored procedure--A stored procedure is a predefined SQL query that is stored in the data dictionary. Stored procedures are designed to allow more efficient queries. Using stored procedures, you can reduce the amount of information that must be passed to the RDBMS and thus reduce network traffic and improve performance.
  • Database trigger--A database trigger is a procedure that is run automatically when an event occurs. This procedure, which is defined by the administrator or developer, is triggered, or run, whenever the event occurs. The event could be an insert, a deletion, or even a selection of data from a table.
  • Sequence--The Oracle sequence generator is used to automatically generate a unique sequence of numbers in cache. By using the sequence generator you can avoid the steps necessary to create this sequence on your own such as locking the record that has the last value of the sequence, generating a new value, and then unlocking the record.

Segments, Extents, and Data Blocks

Within Oracle, the space used to store data is controlled by the use of logical structures. These structures consist of the following:

  • Data blocks--A block is the smallest unit of storage in an Oracle database. The database block contains header information concerning the block itself as well as the data.
  • Extents--Extents consist of data blocks.
  • Segments--A segment is a set of extents used to store a particular type of data, as shown in Figure 2.2.

Figure 2.2. Segments, extents, and data blocks.

Segments

An Oracle database can use four types of segments:

  • Data segment--Stores user data within the database.
  • Index segment--Stores indexes.
  • Rollback segment--Stores rollback information used when data must be rolled back.
  • Temporary segment--Created when a SQL statement needs a temporary work area; these segments are destroyed when the SQL statement is finished. These segments are used during various database operations, such as sorts.

Extents

Extents are the building blocks of segments; in turn, they consist of data blocks. An extent is used to minimize the amount of wasted (empty) storage. As more and more data is entered into tablespaces in your database, the extents used to store that data can grow or shrink as necessary. In this manner, many tablespaces can share the same storage space without preallocating the divisions between those tablespaces.

At tablespace-creation time, you can specify the minimum number of extents to allocate as well as the number of extents to add at a time when that allocation has been used. This arrangement gives you efficient control over the space used in your database.
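
For example, default extent sizing for a tablespace can be set with a DEFAULT STORAGE clause at creation time; the values below are only illustrative, and the connect string is a placeholder:

$ sqlplus -s system/manager@oradb1 <<!
CREATE TABLESPACE history_data
DATAFILE '/u04/oradata/oradb1/history01.dbf' SIZE 500M
DEFAULT STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 2 PCTINCREASE 0);
exit
!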

Data Blocks

Data blocks are the smallest pieces of an Oracle database; they are physically stored on disk. Although the data block in most systems is 2KB (2,048 bytes), you can change this size for efficiency depending on your application or operating system.
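
You can check the block size of an existing database by querying V$PARAMETER, just as the tablespace script in the previous post does (the connect string is a placeholder):

$ sqlplus -s system/manager@oradb1 <<!
SELECT VALUE FROM V\$PARAMETER WHERE NAME = 'db_block_size';
exit
!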


NOTE: Oracle blocks do not need to be, and may not be the same as, operating system data blocks. In fact, in most cases they are not.

The Oracle Instance

The Oracle instance consists of the Oracle processes and shared memory necessary to access information in the database. The instance is made up of the user processes, the Oracle background processes, and the shared memory used by these processes (see Figure 2.3).

The Oracle Memory Structure

New Term: Oracle uses shared memory for several purposes, including caching of data and indexes as well as storing shared program code. This shared memory is broken into various pieces, or memory structures. The basic memory structures associated with Oracle are the System Global Area (SGA) and the Program Global Area (PGA).

Figure 2.3. The Oracle instance.

The System Global Area (SGA)

The SGA is a shared memory region that Oracle uses to store data and control information for one Oracle instance. The SGA is allocated when the Oracle instance starts and deallocated when the Oracle instance shuts down. Each Oracle instance that starts has its own SGA. The information in the SGA consists of the following elements, each of which has a fixed size and is created at instance startup:

The database buffer cache--This stores the most recently used data blocks. These blocks can contain modified data that has not yet been written to disk (sometimes known as dirty blocks), blocks that have not been modified, or blocks that have been written to disk since modification (sometimes known as clean blocks). Because the buffer cache keeps blocks based on a most recently used algorithm, the most active buffers stay in memory to reduce I/O and improve performance.

  • The redo log buffer--This stores redo entries, or a log of changes made to the database. The redo log buffers are written to the redo log as quickly and efficiently as possible. Remember that the redo log is used for instance recovery in the event of a system failure.
  • The shared pool--This is the area of the SGA that stores shared memory structures such as shared SQL areas in the library cache and internal information in the data dictionary. The shared pool is important because an insufficient amount of memory allocated to the shared pool can cause performance degradation. The shared pool consists of the library cache and the data-dictionary cache.
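
The sizes of these SGA components for a running instance can be checked with a quick query (connect string is a placeholder):

$ sqlplus -s system/manager@oradb1 <<!
SELECT * FROM V\$SGA;
exit
!

V$SGA reports the fixed area, the variable area (which includes the shared pool), the database buffer cache, and the redo log buffer.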

The Library Cache

The library cache is used to store shared SQL. Here the parse tree and the execution plan for every unique SQL statement are cached. If multiple applications issue the same SQL statement, the shared SQL area can be accessed by each to reduce the amount of memory needed and to reduce the processing time used for parsing and execution planning.
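
A rough indication of how well the library cache is doing is the ratio of reloads to pins in V$LIBRARYCACHE; reloads should stay a small fraction of pins (connect string is a placeholder):

$ sqlplus -s system/manager@oradb1 <<!
SELECT SUM(PINS), SUM(RELOADS) FROM V\$LIBRARYCACHE;
exit
!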

The Data-Dictionary Cache

The data dictionary contains a set of tables and views that Oracle uses as a reference to the database. Oracle stores information here about the logical and physical structure of the database. The data dictionary contains information such as the following:

  • User information, such as user privileges
  • Integrity constraints defined for tables in the database
  • Names and data types of all columns in database tables
  • Information on space allocated and used for schema objects

The data dictionary is frequently accessed by Oracle for the parsing of SQL statements. This access is essential to the operation of Oracle; performance bottlenecks in the data dictionary affect all Oracle users. Because of this, you should make sure that the data-dictionary cache is large enough to cache this data. If you do not have enough memory for the data-dictionary cache, you see a severe performance degradation. If you ensure that you have allocated sufficient memory to the shared pool where the data-dictionary cache resides, you should see no performance problems.
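
Similarly, the data-dictionary cache can be checked through V$ROWCACHE; the GETMISSES total should stay small relative to GETS (connect string is a placeholder):

$ sqlplus -s system/manager@oradb1 <<!
SELECT SUM(GETS), SUM(GETMISSES) FROM V\$ROWCACHE;
exit
!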

The Program Global Area (PGA)

The PGA is a memory area that contains data and control information for the Oracle server processes. The size and content of the PGA depends on the Oracle server options you have installed. This area consists of the following components:

  • Stack space--This is the memory that holds the session's variables, arrays, and so on.
  • Session information--If you are not running the multithreaded server, the session information is stored in the PGA. If you are running the multithreaded server, the session information is stored in the SGA.
  • Private SQL area--This is an area in the PGA where information such as binding variables and runtime buffers is kept.

Processes

New Term: In many operating systems, traditional processes have been replaced by threads or lightweight processes. The term process is used in this book to describe a thread of execution, or a mechanism that can execute a set of code; process refers to the mechanism of execution and can refer to a traditional process or a thread.

The Oracle RDBMS uses two types of processes: user processes and Oracle processes (also known as background processes). In some operating systems (such as Windows NT), these processes are actually threads; for the sake of consistency, I will refer to them as processes.

User Processes

User, or client, processes are the user's connections to the RDBMS system. The user process manipulates the user's input and communicates with the Oracle server process through the Oracle program interface. The user process is also used to display the information requested by the user and, if necessary, can process this information into a more useful form.

Oracle Processes

Oracle processes perform functions for users. Oracle processes can be split into two groups: server processes (which perform functions for the invoking process) and background processes (which perform functions on behalf of the entire RDBMS).

Server Processes (Shadow Processes)

Server processes, also known as shadow processes, communicate with the user and interact with Oracle to carry out the user's requests. For example, if the user process requests a piece of data not already in the SGA, the shadow process is responsible for reading the data blocks from the datafiles into the SGA. There can be a one-to-one correlation between user processes and shadow processes (as in a dedicated server configuration); although one shadow process can connect to multiple user processes (as in a multithreaded server configuration), doing so reduces the utilization of system resources.

Background Processes

Background processes are used to perform various tasks within the RDBMS system. These tasks vary from communicating with other Oracle instances and performing system maintenance and cleanup to writing dirty blocks to disk. Following are brief descriptions of the nine Oracle background processes:

  • DBWR (Database Writer)--DBWR is responsible for writing dirty data blocks from the database block buffers to disk. When a transaction changes data in a data block, that data block need not be immediately written to disk. Therefore, the DBWR can write this data to disk in a manner that is more efficient than writing when each transaction completes. The DBWR usually writes only when the database block buffers are needed for data to be read. Data is written in a least recently used fashion. For systems in which asynchronous I/O (AIO) is available, there should be only one DBWR process. For systems in which AIO is not available, performance can be greatly enhanced by adding more DBWR processes.
  • LGWR (Log Writer)--The LGWR process is responsible for writing data from the log buffer to the redo log.
  • CKPT (Checkpoint)--The CKPT process is responsible for signaling the DBWR process to perform a checkpoint and to update all the datafiles and control files for the database to indicate the most recent checkpoint. A checkpoint is an event in which all modified database buffers are written to the datafiles by the DBWR. The CKPT process is optional. If the CKPT process is not present, the LGWR assumes these responsibilities.
  • PMON (Process Monitor)--PMON is responsible for keeping track of database processes and cleaning up if a process prematurely dies (PMON cleans up the cache and frees resources that might still be allocated). PMON is also responsible for restarting any dispatcher processes that might have failed.
  • SMON (System Monitor)--SMON performs instance recovery at instance startup. This includes cleaning temporary segments and recovering transactions that have died because of a system crash. The SMON also defragments the database by coalescing free extents within the database.
  • RECO (Recovery)--RECO is used to clean transactions that were pending in a distributed database. RECO is responsible for committing or rolling back the local portion of the disputed transactions.
  • ARCH (Archiver)--ARCH is responsible for copying the online redo log files to archival storage when they become full. ARCH is active only when the RDBMS is operated in ARCHIVELOG mode. When a system is not operated in ARCHIVELOG mode, it might not be possible to recover after a system failure. It is possible to run in NOARCHIVELOG mode under certain circumstances, but you should typically operate in ARCHIVELOG mode.
  • LCKn (Parallel Server Lock)--Up to 10 LCK processes are used for interinstance locking when the Oracle Parallel Server option is used.
  • Dnnn (Dispatcher)--When the Multithreaded Server option is used, at least one Dispatcher process is used for every communications protocol in use. The Dispatcher process is responsible for routing requests from the user processes to available shared server processes and back.
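
On UNIX you can see the background processes for a running instance with the same kind of ps command used in the previous post:

$ ps -ef | grep ora_ | grep -v grep

For an instance named oradb1 you would typically see ora_pmon_oradb1, ora_dbw0_oradb1, ora_lgwr_oradb1, ora_ckpt_oradb1, ora_smon_oradb1, and ora_reco_oradb1, plus the archiver and dispatcher processes when those options are in use.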

How Transactions Work

New Term: To give you a better idea how Oracle operates, this section analyzes a sample transaction. Throughout this book, the term transaction is used to describe a logical group of work that can consist of one or many SQL statements and must end with a commit or a rollback. Because this example is of a client/server application, SQL*Net is necessary. The following steps are executed to complete the transaction:

1. The application processes the user input and creates a connection to the server via SQL*Net.

2. The server picks up the connection request and creates a server process on behalf of the user.

3. The user executes a SQL statement or statements. In this example, the user changes the value of a row in a table.

4. The server process checks the shared pool to see whether there is a shared SQL area that has this identical SQL statement. If it finds an identical shared SQL area, the server process checks whether the user has access privileges to the data. If so, the server process uses the shared SQL area to process the request. If a shared SQL area is not found, a new shared SQL area is allocated, and the statement is parsed and executed.

5. The server process finds the data in the SGA (if it is present there) or reads the data from the datafile into the SGA.

6. The server process modifies the data in the SGA. Remember that the server processes can read only from the datafiles. At some later time, the DBWR process writes the modified blocks to permanent storage.

7. The user executes either the COMMIT or ROLLBACK statement. A COMMIT finalizes the transaction; a ROLLBACK undoes the changes. If the transaction is being committed, the LGWR process immediately records the transaction in the redo log file.

8. If the transaction is successful, a completion code is returned across the network to the client process. If a failure has occurred, an error message is returned.
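
A trivial transaction of the kind walked through above, using the standard SCOTT demo schema purely as an example:

$ sqlplus -s scott/tiger@oradb1 <<!
UPDATE emp SET sal = sal * 1.1 WHERE empno = 7369;
COMMIT;
exit
!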


NOTE: A transaction is not considered committed until the write to the redo log file is complete. This arrangement ensures that in the event of a system failure, a committed transaction can be recovered. If a transaction has been committed, it is set in stone.

While transactions occur, the Oracle background processes do their jobs, keeping the system running smoothly. While this process occurs, hundreds of other users might be performing similar tasks. Oracle's job is to keep the system in a consistent state, to manage contention and locking, and to perform at the necessary rate.

This overview is intended to give you an understanding of the complexity and amount of interaction involved in the Oracle RDBMS. As you look in detail at the tuning of the server processes and applications later in this book, you can use this overview as a reference to the basics of how the Oracle RDBMS operates. Because of the differences in operating systems, minor variances in different environments will be discussed individually.

RDBMS Functionality

If the RDBMS is to operate, you must provide for certain functions, including data integrity, recovery from failure, error handling, and so on. This is accomplished via events such as checkpointing, logging, and archiving. The following sections list and describe some of these functions.

Checkpointing

You know that Oracle uses either the CKPT background process or the LGWR process to signal a checkpoint; but what is a checkpoint and why is it necessary?

Because all modifications to data blocks are done on the block buffers, some changes to data in memory are not necessarily reflected in the blocks on disk. Because caching is done using a least recently used algorithm, a buffer that is constantly modified is always marked as recently used and is therefore unlikely to be written by the DBWR. A checkpoint is used to ensure that these buffers are written to disk by forcing all dirty buffers to be written out on a regular basis. This does not mean that all work stops during a checkpoint; the checkpoint process has two methods of operation: the normal checkpoint and the fast checkpoint.

In the normal checkpoint, the DBWR merely writes a few more buffers every time it is active. This type of checkpoint takes much longer but affects the system less than the fast checkpoint. In the fast checkpoint, the DBWR writes a large number of buffers at the request of the checkpoint each time it is active. This type of checkpoint completes much more quickly and is more efficient in terms of I/Os generated, but it has a greater effect on system performance at the time of the checkpoint.

You can use the time between checkpoints to improve instance recovery. Frequent checkpoints reduce the time required to recover in the event of a system failure. A checkpoint automatically occurs at a log switch.
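
Checkpoint frequency is influenced by the LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT init.ora parameters; their current values can be checked like this (connect string is a placeholder):

$ sqlplus -s system/manager@oradb1 <<!
SELECT NAME, VALUE FROM V\$PARAMETER WHERE NAME LIKE 'log_checkpoint%';
exit
!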

Logging and Archiving

The redo log records all changes made to the Oracle database. The purpose of the redo log is to ensure that in the event of the loss of a datafile as a result of some sort of system failure, the database can be recovered. By restoring the datafiles back to a known good state from backups, the redo log files (including the archive log files) can replay all the transactions to the restored datafile, thus recovering the database to the point of failure.

When a redo log file is filled in normal operation, a log switch occurs and the LGWR process starts writing to a different redo log file. When this switch occurs, the ARCH process copies the filled redo log file to an archive log file. When this archive process has finished copying the entire redo log file to the archive log file, the redo log file is marked as available. It's critical that this archive log file be safely stored because it might be needed for recovery.
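
You can confirm that each filled redo log group has been archived by querying V$LOG (connect string is a placeholder):

$ sqlplus -s system/manager@oradb1 <<!
SELECT GROUP#, SEQUENCE#, ARCHIVED, STATUS FROM V\$LOG;
exit
!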


NOTE: Remember that a transaction has not been committed until the redo log file has been written. Slow I/Os to the redo log files can slow down the entire system.

What Affects Oracle Performance?

Because one of the roles of the DBA is to anticipate, find, and fix performance problems, you must know what types of things affect performance. To understand why these things affect performance, you must first review the basics of how a computer system works.

Overview of Computer Architecture

Your computer system consists of thousands of individual components that work in harmony to process data. Each of these components has its own job to perform, and each has its own performance characteristics.

The brainpower of the system is the Central Processing Unit (CPU), which processes all the calculations and instructions that run on the computer. The job of the rest of the system is to keep the CPU busy with instructions to process. A well-tuned system runs at maximum performance if the CPU or CPUs are busy 100% of the time.

So how does the system keep the CPUs busy? In general, the system consists of different layers, or tiers, of progressively slower components. Because faster components are typically the most expensive, you must perform a balancing act between speed and cost efficiency.

CPU and Cache

New Term: The CPU and the CPU's cache are the fastest components of the system. The cache is high-speed memory used to store recently used data and instructions so that it can provide quick access if this data is used again in a short time. Most CPU hardware designs have a cache built into the CPU chip. This internal cache is known as a Level 1 (or L1) cache. Typically, an L1 cache is quite small--8-16KB.

When a certain piece of data is wanted, the hardware looks first in the L1 cache. If the data is there, it's processed immediately. If the data is not available in the L1 cache, the hardware looks in the L2 cache, which is external to the CPU chip but located close to it. The L2 cache is connected to the CPU chip(s) on the same side of the memory bus as the CPU. To get to main memory, you must use the memory bus, which affects the speed of the memory access.

Although the L2 cache is twice as slow as the L1 cache, it's usually much larger. Its larger size means you have a better chance of getting a cache hit. Typical L2 caches range in size from 128KB to 4MB.

Slower yet is the speed of the system memory--it's probably five times slower than the L2 cache. The size of system memory can range from 4MB for a small desktop PC to 2-4GB for large server machines. Some supercomputers have even more system memory than that.

As you can see from the timeline shown in Figure 2.4, there is an enormous difference between retrieving data from the L1 cache and retrieving data from the disk. This is why you spend so much time trying to take advantage of the SGA in memory. This is also why hardware vendors spend so much time designing CPU caches and fast memory buses.

Figure 2.4. Component speed comparison.

CPU Design

Most instruction processing occurs in the CPU. Although certain intelligent devices, such as disk controllers, can process some instructions, the instructions these devices can handle are limited to the control of data moving to and from the devices. The CPU works from the system clock and executes instructions based on clock signals. The clock rate and type of CPU determine how quickly these instructions are executed.

The CPU usually falls into one of two groups of processors: Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC).

CISC Processors

CISC processors (like the ones Intel builds) are by far the most popular processors. They are more traditional and offer a large instruction set to the program developer. Some of these instructions can be quite complicated; most instructions require several clock cycles to complete.

CISC processors are complex and difficult to build. Because these chips contain millions of internal components, the components are extremely close together. The physical closeness causes problems because there is no room for error. Each year, technology allows more complex and faster chips to be built, but eventually, physics will limit what can be done.

CISC processors carry out a wide range of tasks and can sometimes perform two or more instructions at a time in parallel. CISC processors perform most tasks, such as RDBMS processing, very well.

RISC Processors

RISC processors are based on the principle that if you can reduce the number of instructions processed by the CPU, the CPU can be simpler to build and can run faster. By putting fewer internal components inside the chip, the speed of the chip can be accelerated. One of the most popular RISC chips on the market is the DEC Alpha.

The system compiler determines what instructions are executed on the CPU chips. When the number of instructions was reduced, compilers were written to exploit this and to compensate for the missing instructions.

By reducing the instruction set, RISC manufacturers have been able to increase the clock speed to many times that of CISC chips. Although the faster clock speed is beneficial in some cases, it offers little improvement in others. One effect of a faster CPU is that the surrounding components such as L2 cache and memory must also run faster at an increase in cost.

One goal of some RISC manufacturers is to design the chip so that the majority of instructions complete within one clock cycle. Some RISC chips can already do this. But because some operations that require a single instruction for a CISC chip might require many instructions for a RISC chip, a speed-to-speed comparison cannot be made.


CISC versus RISC

Both CISC and RISC processors have their advantages and disadvantages; it's up to you to determine whether a RISC processor or a CISC processor will work best for you. When comparing the two types of processors, be sure to look at performance data and not just clock speed. Although the RISC chips have a much faster clock speed, they do less work per instruction. The performance of the system cannot be determined by clock speed alone.


Multiprocessor Systems

Multiprocessor systems can provide significant performance with very good value. With such a system, you can start with one or two processors and add more as needed. Multiprocessors fall into several categories; two of the main types of multiprocessor systems are the Symmetric Multiprocessor (SMP) system and the Massively Parallel Processing (MPP) system.

SMP Systems

SMP systems usually consist of a standard computer architecture with two or more CPUs that share the system memory, I/O bus, and disks. The CPUs are called symmetric because each processor is identical to any other processor in terms of function. Because the processors share system memory, each processor looks at the same data and the same operating system. In fact, the SMP architecture is sometimes called tightly coupled because the CPUs can even share the operating system.

In the typical SMP system, only one copy of the operating system runs. Each processor works independently by taking the next available job. Because the Oracle architecture is based on many processes working independently, you can see great improvement by adding processors.

The SMP system has these advantages:

  • It's cost effective--The addition of a CPU or CPU board is much less expensive than adding another entire system.
  • It's high performing--Under most applications, additional CPUs provide an incremental performance improvement.
  • It's easily upgradable--Simply add a CPU to the system to instantly and significantly increase performance.

A typical SMP system supports between four and eight CPUs. Because the SMP system shares the system bus and memory, only a certain amount of activity can occur before the bandwidth of the bus is saturated. To add more processors, you must go to an MPP architecture.

MPP Systems

MPP systems are based on many independent units. Each processor in an MPP system typically has its own resources (such as its own local memory and I/O system). Each processor in an MPP system runs an independent copy of the operating system and its own independent copy of Oracle. An MPP system is sometimes called loosely coupled.

Think of an MPP system as a large cluster of independent units that communicate through a high-speed interconnect. As with SMP systems, you will eventually hit the bandwidth limitations of the interconnect as you add processors. However, the number of processors with which you hit this limit is typically much larger than with SMP systems.

If you can divide the application among the nodes in the cluster, MPP systems can achieve quite high scalability. Although MPP systems can achieve much higher performance than SMP systems, they are less economical: MPP systems are typically much higher in cost than SMP systems.

CPU Cache

Regardless of whether you use a single-processor system, an SMP system, or an MPP system, the basic architecture of the CPUs is similar. In fact, you can find the same Intel processors in both SMP and MPP systems.

As you learned earlier today, the system cache is important to the system. The cache allows quick access to recently used instructions or data. A cache is always used to store and retrieve data more quickly than the next level of storage (the L1 cache is faster than the L2 cache, the L2 cache is faster than main memory, and so on).

By caching frequently used instructions and data, you increase the likelihood of a cache hit. This can save precious clock cycles that would otherwise have been spent retrieving data from memory or disk.

System Memory Architecture

The system memory is basically a set of memory chips, either protected or not protected, that stores data and instructions used by the system. System memory can be protected by parity or by a more sophisticated advanced ECC correction method. Data parity will detect an incorrect value in memory and flag it to the system. An advanced ECC correction method will not only detect an incorrect value in memory, but in many cases can correct it. The system memory can range in size from 4MB on a small PC to 4GB on a large SMP server.

Typically, the more memory available to Oracle, the better your performance. Allocation of a large SGA allows Oracle to cache more data, thus speeding access to that data.

New Term: System memory is accessed by the CPUs through a high-speed bus that allows large amounts of data and instructions to be quickly moved from the CPU to L2 cache. Data and instructions are typically read from memory in large chunks and put into the cache. Because the CPU expects that memory will be read sequentially, in most cases it will read ahead the data or instruction that it thinks will be needed next. Sometimes this works, so the data that is needed next is already in cache; sometimes the CPU has guessed incorrectly and other data needs to be retrieved. This process of prereading the data is known as prefetching.

Depending on the specific implementation of an SMP system, the memory bus might be shared by all system processors; alternatively, each processor might have a private bus to memory.

Virtual Memory System

New Term: In a virtual memory system, the OS and hardware allow programs and users to use more memory than is actually available in the system hardware. This memory, known as virtual memory, can be mapped to physical memory. Code or data that is being run by the CPU must reside in physical memory. If a program or data that is larger than physical memory is being accessed, the parts of code and data that are not immediately needed by the program can reside in virtual memory, not physical memory. As that bit of code or data is needed, it can be copied into physical memory, and parts no longer needed can be copied to disk. The process of mapping virtual memory onto physical memory by copying the memory to and from disk is called paging or swapping (depending on the OS architecture).

Both paging and swapping serve the same purpose, but each operates slightly differently from the other. In a swapping system, an entire process is swapped out (moved from memory to disk) or swapped in (moved from disk to memory). In a paging system, the movement of data to and from the secondary storage occurs on a memory page basis; when more memory is needed, one or more pages is paged out (moved from memory to disk) to make room. A memory page is the smallest unit of memory that is used in the operating system. A typical memory page size is 4KB. If data is requested from virtual memory and is not in physical memory, that data is paged in (moved from disk to memory) as needed. The rest of this section uses the term paging to describe both paging and swapping.

Suppose you have a computer system with 16MB of physical memory. If you have a program that needs to access 20MB of data, it obviously won't fit in physical memory. In a virtual memory system, the data is read until little memory remains (the OS reserves some for itself), then the OS copies some of the data pages to disk with the paging mechanism. This is usually done using a least recently used algorithm in which the oldest data is moved out. When some memory has been freed, the program can read more data into memory. As far as the program is concerned, all the data is still in memory; in fact, it is--in virtual memory. As the program begins to reread some of the data and manipulate it, different pieces might be paged in (from disk to physical memory) and paged out (from physical memory to disk).

As you can imagine, paging in or out can be time consuming and uses a lot of system resources. This is why I warn you several times in this book to avoid using so much memory that you cause paging or swapping. Access to disk is approximately 50 times slower than access to memory.
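
On most UNIX systems you can watch for sustained paging with standard tools; the exact column names vary by platform:

$ vmstat 5
$ sar -g 5 5

In vmstat, watch the page-in and page-out columns; sar -g (where sar is installed) reports page-out activity.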

Bus Design

New Term: Simply put, bus is a connection path used by the system to move data from one place to another. Buses get complicated when you look at them from a performance perspective: Capacity, or bandwidth, becomes an issue. Over the years, the term bandwidth, which was originally used to describe the electronic characteristics of a circuit, has been adopted by computer designers. In this case, bandwidth refers to the amount of data that can be transmitted across a bus in a certain time.

Several bus designs have been introduced in the last few years, all with the same goal: increased capacity. As processors, network hardware, disk controllers, and disks become increasingly fast, buses must develop to support the load generated by these devices. Thankfully, as computers have increased in performance, computer designers have improved bus designs to accommodate these changes. The system bus should not be a bottleneck in your system.

Oracle Resources

The Oracle DBMS allocates different resources for various different functions, including the allocation of system memory. The memory might be allocated for database caching or for the data dictionary or library cache. The careful balance of this precious resource is very important in tuning the Oracle RDBMS.

As much data as possible must be cached to avoid the additional cost of going to disk. If you allocate a large Oracle data cache, a higher cache-hit rate can be achieved. A high cache-hit rate indicates that a large percentage of requested data is found in the Oracle cache rather than retrieved from disk.

Application Design

Application design can affect performance more than any other factor. In most cases, performance can be severely degraded by an application that does not have well-tuned SQL statements or does not use indexes. A good application design can also significantly improve performance. The application is typically the first place to look when you experience system performance problems.

If a database is built with indexes on a certain set of columns but those columns are not specified in the WHERE clause of the SQL statement, the index probably won't be used. It's not enough to create the correct index on tables; you must ensure that the indexes are used.
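
For example, with a hypothetical orders table, an index on customer_id helps only those queries that actually reference customer_id in the WHERE clause:

$ sqlplus -s system/manager@oradb1 <<!
CREATE INDEX orders_custid_idx ON orders (customer_id);
-- can use the index
SELECT order_date, amount FROM orders WHERE customer_id = 1001;
-- cannot use it: customer_id does not appear in the WHERE clause
SELECT order_date, amount FROM orders WHERE amount > 500;
exit
!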


TIP: It's wise to create a specification identifying the tables and indexes in your database. That way, the application developers and the team that creates the database have a crystal-clear document that identifies which columns are indexed. This can help avoid confusion and allow the application code to fully exploit the indexes.

Oracle Features

Another way to improve Oracle performance is to enable Oracle performance features. Among the most important of these features (and my personal favorite) is the Oracle Parallel Query option. Other Oracle performance features include partitioned tables and the Oracle index-only table, both new in Oracle8.

The Oracle Parallel Query Option

The Oracle Parallel Query option allows parallelism of many different operations, which greatly enhances performance. The Oracle Parallel Query option consists of several different components, including

  • Parallel query
  • Parallel index creation
  • Parallel recovery
  • Parallel table creation
  • Parallel index tables

Parallel Query

The Oracle parallel query allows a single query to be divided into components and run in parallel. Because a query spends much of its time waiting for I/O operations to complete, parallelizing queries can greatly improve performance. In a well-tuned system where I/O is not a problem, parallel queries can run many times faster than normal queries. Statements that can be parallelized include

  • Table scans
  • Sorts
  • Joins


NOTE: You might be wondering why parallelizing operations would help performance; after all, the work must still be done. In a typical Oracle operation (for example, a SELECT statement), the following steps occur:
1. Oracle performs some CPU processing to determine what data is needed.

2. Oracle submits an I/O request to disk (assuming that the data is not already in the SGA) and then waits for that I/O to complete.

3. This operation is repeated until all data is retrieved.

In the case of a parallel query, these steps would be adjusted like so:

1. Oracle performs some CPU processing to determine the query operation.

2. Different Oracle processes or threads receive their instructions on what data is needed.

3. Oracle thread 1 submits an I/O request to disk (if that data is not already in the SGA) and waits for that I/O to complete.

4. Oracle thread 2 submits an I/O request to disk (if that data is not already in the SGA) and waits for that I/O to complete.

5. Oracle thread 3 submits an I/O request to disk (if that data is not already in the SGA) and waits for that I/O to complete.

As shown here, the time-consuming job of retrieving data from disk is spread across several threads whose I/O waits overlap, thus improving performance. This parallelism allows the CPU(s) to be utilized while other threads are waiting for I/Os.

Retrieving data from disk is a slow process compared to the activity of the CPU, and your goal is to keep the CPUs busy. Because a significant part of any Oracle operation involves CPU processing and I/Os, it is possible and desirable to keep the CPUs busy while many I/Os are being processed simultaneously. This is the main goal of the Parallel Query option.
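
The degree of parallelism for an individual query can be requested with the PARALLEL hint. The statement below is a minimal sketch; the emp table and the degree of 4 are illustrative only.

-- Request a degree of parallelism of 4 for the full scan of emp
SELECT /*+ PARALLEL(emp, 4) */ COUNT(*)
FROM emp;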


Parallel Index Creation

Index creation involves reading from data tables and then writing to the index tables. Because the parallel query allows reading of tables to be accelerated, the index-creation process is sped up. Index creations can be quite time consuming, so this can be a real advantage.
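
A sketch of a parallel index build follows; the table, column, index name, and degree are illustrative only.

-- Build the index using four parallel processes
CREATE INDEX emp_region_ix ON emp (region)
PARALLEL (DEGREE 4);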

Parallel Recovery

Recovery from a system failure can be quite time consuming. During recovery, users must usually wait for the system to come back online, so any improvement in performance is an advantage. Parallel recovery can speed the recovery process by parallelizing the read from the redo log files, and the roll forward and rollback process.
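
The number of processes used for roll-forward recovery is controlled by the RECOVERY_PARALLELISM initialization parameter. The init.ora line below is a hedged sketch; the value shown is illustrative and should be sized for your own system.

# init.ora: use up to four processes for roll-forward recovery
RECOVERY_PARALLELISM = 4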

Parallel Table Creation

Although the Oracle Parallel Query option does not generally allow table creation to be performed in parallel, an exception is when a table is created as a subset of other tables. Data is often reduced from several large tables into a smaller summary table, and this operation can benefit from parallelism. In such cases, the following statement allows for parallelism:

CREATE TABLE table_name AS SELECT...
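
A more complete sketch of a parallel CREATE TABLE ... AS SELECT follows; the summary table name and the degree of 4 are illustrative only.

-- Create a summary table in parallel from a larger table
CREATE TABLE emp_summary
PARALLEL (DEGREE 4)
AS SELECT region, COUNT(*) emp_count
   FROM emp
   GROUP BY region;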

Oracle Index Tables

New to Oracle8, the index table allows indexes and tables to be stored together; this saves space and improves performance by reducing disk I/O. If you reduce the number of required disk I/Os, data can be accessed much faster.
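
In Oracle8, an index-only (index-organized) table is created with the ORGANIZATION INDEX clause and requires a primary key. The following is a minimal sketch; the lookup table and its columns are illustrative only.

-- Rows are stored in the index leaf blocks rather than in a separate table segment
CREATE TABLE state_lookup
(
   state_code CHAR(2) PRIMARY KEY,
   state_name VARCHAR2(30)
)
ORGANIZATION INDEX;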

OS Resources

New Term: In most systems, only a few resources can be tuned at the operating-system level. Most OS parameters are changed simply to give Oracle sufficient resources; allocating additional resources beyond that usually does not improve performance, but a lack of resources can certainly degrade it. The term OS resources most often refers to system memory or, in the case of UNIX, shared memory. Other OS resources and tunables include network buffers and disk I/O tunables.


TIP: Windows NT is largely self-tuning, but there are a few things, relating primarily to configuration, to look out for:
  • Remove unnecessary network protocols--Depending on how the system is configured, several network protocols that you do not use might be configured into your system. These extra protocols use CPU and memory resources.
  • Configure the protocols you use in order from most-often used to least-often used--This reduces some of the overhead associated with traversing the infrequently used protocols.
  • Keep a close eye on paging--Windows NT treats all memory as virtual. The best way to determine whether your system is paging is to watch Pages/Sec in NT's perfmon. If paging occurs, lower the amount of memory allocated to Oracle.

Hardware

Several hardware factors can affect your system's performance. These factors include

  • Memory capacity
  • Number of CPUs
  • CPU cache
  • Memory-bus bandwidth
  • I/O capacity

Memory Capacity

Earlier today you saw an overview of how the system hardware operates. Clearly, any operation that must access slower components, such as a disk or network, will slow down processing. Therefore, it is important that you have sufficient memory in your system.

New Term: Most hardware architectures are limited to 4GB of physical memory, but some architectures on the market support much more. These architectures are said to support a VLM, or Very Large Memory, architecture. Soon it will be possible to support hundreds of gigabytes of physical memory in a system, allowing for very fast RDBMS operations.

System memory is allocated to Oracle and used for database caching, user memory, and the shared pool, which holds both the data dictionary and the library cache. You must have enough memory for the shared pool because an undersized shared pool can hurt performance. Once the shared pool's needs are satisfied, the more database buffers you can allocate to the DBMS, the better. Be careful, though, to avoid starving the PGA memory needed by your processes, and avoid paging at all costs. You can never have too much memory in your system; anything that can be cached reduces system I/O and improves performance.
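
In Oracle8, these memory areas are sized with initialization parameters. The init.ora lines below are a hedged sketch; the values are illustrative only and must be sized for your own system and workload.

# init.ora memory settings (illustrative values only)
DB_BLOCK_BUFFERS = 20000        # number of database block buffers in the SGA
SHARED_POOL_SIZE = 50000000     # data dictionary and library cache, in bytes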

Number of CPUs

Oracle typically scales well with additional CPUs. By adding CPUs you can see significant performance improvement with little additional cost. Some factors that determine how much improvement you will see by adding more processors are the CPU cache and memory-bus bandwidth.

CPU Cache

A large CPU cache allows more data and executable code to be kept close to the processor rather than fetched from main memory, which reduces the number of times the CPU must access main memory. Whenever the CPU accesses memory, a slowdown occurs while the CPU waits for that data or code to be retrieved. The wait is especially bad when the memory bus is busy, because the CPU must also wait for the bus to become free.

Memory-Bus Bandwidth

The memory-bus bandwidth determines how quickly data can be transferred between the CPU and memory. If the memory bus is busy when data or code is needed, the CPU stalls until the bus becomes free. This can severely degrade performance in a multiprocessor computer. A fast memory bus reduces this problem, as does a large CPU cache, which allows more data and code to be cached.

I/O Capacity

I/O is typically one of the biggest factors limiting system performance. Because most DBMS operations involve retrieving data from disk, I/O can be a limiting factor if you do not have adequate capacity for your system load. Fortunately, you can usually solve this problem by carefully configuring your system for proper I/O distribution and by having sufficient I/O capacity. Simply having adequate disk space is insufficient; you must also have enough disk drives to support the number of disk I/Os that the system requires.

Oracle8 New Features

Oracle8 has introduced many new features, and I would like to focus on a few key features for the Oracle8 DBA:

  • Partitioned objects
  • Improved parallelism
  • New index types
  • Enhanced recovery features

Partitioned Objects

Partitioned objects allow Oracle objects, such as tables and indexes, to be broken into smaller, more manageable pieces. Partitioning these objects allows many operations that could normally be performed on only a table or an index to be divided into operations on a partition. By dividing these operations, you can often increase the parallelism of those operations, thus improving performance and minimizing system downtime.

Partitions are enabled via the PARTITION BY RANGE clause of the CREATE TABLE statement. In this manner, ranges of data are assigned to each individual partition like so:

CREATE TABLE emp
(
   name     CHAR(30),
   address  CHAR(40),
   region   INTEGER
)
PARTITION BY RANGE (region)
(
   PARTITION emp_p1 VALUES LESS THAN (10) TABLESPACE tbl0,
   PARTITION emp_p2 VALUES LESS THAN (20) TABLESPACE tbl1,
   PARTITION emp_p3 VALUES LESS THAN (30) TABLESPACE tbl2
);

This creates a table with partitioning, as shown in Figure 2.5.

Figure 2.5. Table partitioning.

Partitioning is recommended for large tables because it makes them much more manageable. Oracle does not currently support partitioning of clusters. By partitioning a table, you can break that large table into several much smaller pieces. A partitioned table can take advantage of some of the following features:

  • Partitioned DML
  • Exporting/importing by partition
  • Range partitioning
  • Local and global indexing
  • Parallel loading by partition

Partitioned DML

Parallel INSERT, DELETE, and UPDATE operations can occur on a partition basis. Using partitions allows these operations to be conducted either globally or locally within a partition.
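
For example, DML can be directed at a single partition by using a partition-extended table name. This sketch assumes the emp table and the partition names shown in the earlier CREATE TABLE example.

-- Operate on only one partition of the emp table
DELETE FROM emp PARTITION (emp_p1)
WHERE region = 5;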

Exporting/Importing by Partition

Partitioning allows operations such as exports and imports to be performed on a partition basis. This can reduce the time required by some maintenance operations, such as reorganization of data or reclustering. This also allows you to change the physical layout of your database on a partition basis. If you limit the scope of export and import operations, they can benefit from a large degree of parallelism.

Range Partitioning

Range partitioning is a method whereby the partitioning of data is done based on the value of the data itself. This allows for tremendous flexibility in distributing data based on ranges of data values. Range partitioning allows you to partition high-volume data separately from low-volume data or to separate current from old data.

Local and Global Indexing

New Term: A local index indexes data that resides in only one partition. A global index indexes data that resides on more than one partition. This allows for great flexibility in terms of adding new indexes, reducing index sizes, and allowing for partition independence.

An example of where local indexing might be beneficial is a table where sales records are stored. Using table and index partitioning, you can store data and indexes separately based on calendar months; doing this allows reduced index size and faster index lookups for entries of a particular month. If you partition these entries you can add new months and delete outdated entries without reindexing the entire table. You could keep 12 months of partitions and indexes online in this manner.
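
A local index on a partitioned table is created with the LOCAL keyword, which gives the index one partition per table partition. The sketch below reuses the emp example; the index name is illustrative.

-- One index partition is created for each partition of emp;
-- an index created without LOCAL on a partitioned table is a global index
CREATE INDEX emp_name_ix ON emp (name)
LOCAL;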

Parallel Loading by Partition

With a partitioned table, SQL*Loader can either load an entire table in parallel by partition or simply load a single partition. Either method provides great flexibility.

If you use the conventional path load, the loader automatically distributes the data to the correct partition and updates the local and global indexes. You can also use the loader to load a partitioned table or a partition of a table. Again, indexes are built automatically. It is also possible to direct-load a partition in parallel provided that no global indexes exist, but you must rebuild the local indexes yourself.

Improved Parallelism

The arrival of Oracle8 has heralded tremendous improvement in the area of parallelization. In addition to the new parallel features listed previously, some existing parallel operations have been extended.

Parallel recovery has been improved: rollbacks of parallel DML operations that have failed can now themselves be performed in parallel. This parallel transaction recovery is supported for transaction and process failures, but not during instance recovery.

New parallel hints have been added for parallel insert operations. The APPEND hint tells the optimizer to append the insert data beyond the high water mark of the segment.
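
In Oracle8, parallel DML must first be enabled for the session. The statements below are a hedged sketch; the emp_history table and the degree of 4 are illustrative only.

-- Parallel DML must be enabled at the session level
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path, parallel insert appended above the high water mark
INSERT /*+ APPEND PARALLEL(emp_history, 4) */ INTO emp_history
SELECT * FROM emp;

COMMIT;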

New Index Types

The index-only table is new in Oracle8. With traditional indexes and tables, data and indexes are stored separately. With an index-only table, the data to which the index refers is stored in the leaf block or lowest level block of the index, so the data and indexes are stored together. Depending on your application, this can be an advantage.

Applications that access data primarily via a key value can see an advantage from the use of index-only tables. Because the data is stored within the index, it is immediately available once the index search reaches its lowest level. This can speed data retrieval.

Applications that do not access data primarily via a key value will see no improvement; indeed, performance will likely be degraded in these applications. Any application that involves table scans or requires multiple indexes will not benefit from the index table. The index table is covered in much more detail on Day 13.

Enhanced Recovery Features

Oracle has made tremendous improvements in the areas of backup and recovery. Most of these new features revolve around the Recovery Manager. Another recovery feature in Oracle8 is the image copy backup, which can improve recovery time in the event of a failure.

Recovery Manager

New Term: Recovery Manager is an online utility designed to assist the DBA with all backup and recovery operations. Not only does it perform the backup and recovery, it maintains a database called the recovery catalog that stores information about these operations.

Image Copy Backup

An image copy backup essentially allows you to copy a datafile to another place on disk or to another disk on your system. In the event of a failure, no restore operation is necessary; you simply switch to that backup copy and then perform recovery to make it current. This can be the fastest way to recover from a failure.

Oracle Products

As part of the overview of the Oracle system, I would like to briefly cover the optional Oracle products that are available. Although many of these products are covered elsewhere in this book, you should at least be aware of their existence. The Oracle product line is divided into three areas:

  • The Oracle server
  • Development tools
  • Applications

The Oracle Server

The Oracle server is the DBMS itself, and includes many options and features such as the Parallel Query option, network protocols, and advanced system administration options. Some of the key options available to the Oracle server include

  • Enterprise Manager--This option is fairly new to Oracle and consists of the management console and intelligent agents. The management console, which is the core element in Oracle's new graphical administrative package, runs only on Windows NT but can manage any Oracle server. The console allows the DBA to graphically control one or more Oracle systems; it can be used to configure and manage Oracle instances, to diagnose problems, and to alert the DBA in the event of a problem. The keys to Enterprise Manager are the intelligent agents, which run on the Oracle server and provide the communication layer necessary for the console to communicate with these systems. The intelligent agents use the industry-standard SNMP (Simple Network Management Protocol) to communicate with the console, thus allowing for future expansion.


NOTE: Throughout this book, Enterprise Manager is referenced as the primary method for administering the system. Nonetheless, command-line management is also covered.
  • ConText--When integrated with any text system, Oracle ConText can analyze, filter, and reduce text for speed reading and summary viewing. Oracle ConText returns detailed assessments of the text it processes, checking for grammatical errors and rating the quality and style of the writing.
  • Media Server--Oracle Media Server provides high-performance, scalable, and reliable multimedia library functions on a wide variety of general-purpose systems. Media Server handles the storage, retrieval, and management of movies, music, photographs, and text articles.
  • The Spatial Data option--The Oracle Spatial Data option can be used to manage a database that contains spatial data. This option allows for the storage of spatial or geographical data. If you store the spatial data within the database, the complexity of managing the storage is reduced and the performance is increased.
  • The Oracle Web server--The Oracle Web server is designed to provide front-end services that allow World Wide Web access to an Oracle database. This product allows Web users to retrieve information directly from an Oracle database rather than from traditional flat files. It can enhance the performance and functionality of your Web server through the use of indexes and data caching and, with the flexibility of the Oracle RDBMS, through language-sensitive context and other features.
  • The Internet Commerce server--The Internet Commerce server is a complete set of tools designed to help you create, run, and administer an Oracle system that is used for Web commerce. Because it is based on the proven technology of the Oracle server, the system can provide these services in a robust and secure fashion.

Development Tools

One of Oracle's strongest points has been its development tools. Not only are these tools robust and full featured, they are flexible as well. When client/server systems became popular in the early 1990s, the Oracle tools quickly adapted. When HTML and Java applications became popular in the mid-1990s, the Oracle development tools quickly adapted yet again. The adaptability of these tools guarantees that applications developed with them can be quickly adjusted for new uses and technologies. Oracle provides the following tools:

  • Designer/2000--This set of modeling tools reduces some of the pain associated with designing systems. These tools, which help with process and data modeling, can be used to provide input into the Developer/2000 system and to develop the fundamental models that are the foundation for your business processes.
  • Developer/2000--This set of tools allows you to create an application and roll it out in Windows, Macintosh, Motif, and character mode. Developer/2000 incorporates graphics and images as well as support for multimedia objects such as video and sound in a variety of standard formats.
  • Discoverer/2000--This data-analysis tool supports querying, reporting, and the graphical multidimensional analysis of the data warehouse. Its key features include graphical-representation and drill-down features.
  • Power Objects--This lightweight, GUI development tool, which is available for Windows, Macintosh, and OS/2, allows the quick development of applications that use relatively small system resources. Power Objects is conceptually similar to Developer/2000, but lacks many of Developer/2000's features.
  • Objects for OLE--This set of tools allows you to link OLE-compliant applications to an Oracle RDBMS. This tool provides a quick and easy way to exploit the power of applications such as spreadsheets. Objects for OLE also allows easy linking of database tables into word-processing documents.
  • Programmer/2000--This suite of tools helps with the development of SQL, PL/SQL, and stored procedures. These tools can be helpful for application developers.
  • Media Objects--Oracle's lightweight tool for developing multimedia applications, Media Objects supports client/server, CD-ROM, and interactive television processes.
  • Database Designer--This lightweight version of the Oracle Designer/2000 product can assist in the design and creation of databases. Database Designer, a single-user tool, graphically designs the database tables and generates SQL that can be used to create this database.

Applications

Oracle's application software falls into two main categories: traditional applications and newer OLAP (Online Analytical Processing) applications.

Traditional Oracle Applications

Oracle's suite of traditional applications is used to perform basic and essential business tasks. These applications are used by many of the world's largest companies. The suite provides support for the following areas:

  • Financial
  • Human resources
  • Project management
  • Sales
  • Manufacturing

OLAP Applications

The OLAP applications provide a graphical interface for DSS and data-warehousing applications. These tools lend a multidimensional model to the database, providing analysis, forecasting, and statistical operations.

Other Products

Oracle offers many other products that are not mentioned here. These products handle various tasks such as networking, office automation, workgrouping, and so on. Although these products and services are too numerous to cover here, rest assured that Oracle's full line can handle most (if not all) of your database and communication needs.