TIP #45 : Performance tuning PL/SQL with the profiler (dbms_profiler).
This post shows how to use it.
Before using dbms_profiler, it must be installed and its tables created:
- @?/rdbms/admin/profload.sql (installs the dbms_profiler package)
- @?/rdbms/admin/proftab.sql (creates the profiler tables plsql_profiler_runs, plsql_profiler_units and plsql_profiler_data)
How to trace :
- exec dbms_profiler.start_profiler('Sample1');
- Run the PL/SQL code (preferably a procedure/function/package rather than an anonymous block, so the per-line data is tied to a named unit).
- exec dbms_profiler.stop_profiler;
- exec DBMS_PROFILER.FLUSH_DATA;
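The steps above can be combined into one block; one of the start_profiler overloads also returns the run number, which saves looking the runid up afterwards (my_proc is a hypothetical procedure standing in for the code being profiled):

```sql
declare
  v_run binary_integer;
begin
  -- start a run and capture its runid via the OUT parameter
  dbms_profiler.start_profiler(run_comment => 'Sample1',
                               run_number  => v_run);
  my_proc;                   -- hypothetical: the PL/SQL being profiled
  dbms_profiler.stop_profiler;
  dbms_profiler.flush_data;  -- write collected data to the profiler tables
  dbms_output.put_line('runid = ' || v_run);
end;
/
```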
How to see the result :
- Find the runid of the current trace:
SELECT runid,
run_date,
run_comment,
run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
- Per-line execution times for each PL/SQL unit:
SELECT u.runid,
u.unit_number,
u.unit_type,
u.unit_owner,
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
d.min_time,
d.max_time
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid AND u.unit_number = d.unit_number
WHERE u.runid = &1
ORDER BY u.unit_number, d.line#;
FYI: calling dbms_utility.get_time before and after a section of PL/SQL code also shows the exact run time of each piece.
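A minimal sketch of that timing technique (dbms_utility.get_time returns hundredths of a second; my_proc is a hypothetical stand-in for the piece being timed):

```sql
declare
  t0 number;
begin
  t0 := dbms_utility.get_time;   -- snapshot before
  my_proc;                       -- hypothetical: the code being timed
  dbms_output.put_line('elapsed: ' ||
      (dbms_utility.get_time - t0) / 100 || ' sec');
end;
/
```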
TIP #44 : Performance tips for PL/SQL code.
- In conditions, put the least expensive condition first (evaluation short-circuits as soon as the result is known).
- Minimize implicit conversions; try not to mix datatypes (e.g. number and pls_integer in one expression).
- Use pls_integer when you can (best performance for integer arithmetic).
- Use binary_float or binary_double for floating-point arithmetic.
- Declare large varchar2 variables with a size above 4000: Oracle then waits until run time to allocate memory for the actual value (saving memory). For varchar2 below that size, the full declared length is allocated up front, which is faster but uses more memory.
- Put related subprograms into a package: the first call to any of its functions/procedures loads the whole package into memory, where later calls can reference it.
- Pin frequently used packages with dbms_shared_pool.keep to prevent them from aging out of the shared pool.
- OUT parameters add some overhead: to guarantee that after an unhandled exception the actual parameter keeps its original value back in the calling program, the value is copied in and back out. If that guarantee is not important, use OUT NOCOPY to reduce the overhead (most noticeable for subprograms with a big OUT collection or LOB).
- If you are running SQL statements inside a PL/SQL loop, use the FORALL statement to replace loops of INSERT, UPDATE, and DELETE statements.
- If you are looping through the result set of a query, use the BULK COLLECT clause of the SELECT INTO statement to bring the entire result set into memory in a single operation.
- Prevent unnecessary function calls:
for item in (select distinct(sqrt(empno)) from emp) ===> change to ===>
for item in (select sqrt(empno) from (select distinct empno from emp))
Example :
=== Poor performance
begin
  for myvar in (select empno from emp) loop
    if (myvar.empno > 1000) then
      delete from emp2 where empno = myvar.empno;
    end if;
  end loop;
end;
=== Good performance
declare
  type mytype is table of number index by pls_integer;
  myvar mytype;
begin
  select empno bulk collect into myvar from scott.emp where empno > 1000;
  -- note: forall raises an error if myvar is empty; guard with myvar.count > 0 if needed
  forall i in myvar.first..myvar.last
    delete from scott.emp2 where empno = myvar(i);
end;
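When the driving query can return a very large number of rows, the same pattern can be bounded with the LIMIT clause so that only a fixed batch is held in memory at a time (a sketch against the same scott tables):

```sql
declare
  type mytype is table of number index by pls_integer;
  myvar mytype;
  cursor c is select empno from scott.emp where empno > 1000;
begin
  open c;
  loop
    fetch c bulk collect into myvar limit 1000;  -- at most 1000 rows in memory
    exit when myvar.count = 0;                   -- test count, not c%notfound, first
    forall i in 1..myvar.count
      delete from scott.emp2 where empno = myvar(i);
  end loop;
  close c;
end;
/
```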
==== Forall with exceptions: SQL%BULK_ROWCOUNT(i), SQL%BULK_EXCEPTIONS.COUNT <==== Only available after forall
declare
  type mytype is table of number(10) index by pls_integer;
  myvar mytype;
  errors number;
begin
  select empno bulk collect into myvar from scott.emp where empno > 1000;
  forall i in myvar.first..myvar.last save exceptions
    delete from scott.emp2 where empno = myvar(i);
exception
  when others then  -- now we figure out what failed and why
    errors := SQL%BULK_EXCEPTIONS.COUNT;
    dbms_output.put_line('Number of statements that failed: ' || errors);
    for i in 1..errors loop
      dbms_output.put_line('Error #' || i || ' occurred during iteration #' ||
          SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
      dbms_output.put_line('Error message is ' ||
          SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    end loop;
end;
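The OUT NOCOPY hint mentioned earlier is just a hint on the formal parameter; a minimal sketch (the type and procedure names are illustrative):

```sql
declare
  type num_tab is table of number index by pls_integer;

  -- OUT NOCOPY asks the compiler to pass by reference instead of copying
  -- the whole collection in and back out; the trade-off is that after an
  -- unhandled exception the actual parameter may be left partly modified.
  procedure fill(p out nocopy num_tab) is
  begin
    for i in 1..100000 loop
      p(i) := i;
    end loop;
  end;

  v num_tab;
begin
  fill(v);
  dbms_output.put_line(v.count || ' elements');
end;
/
```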
TIP #43: Effective DOP.
The big question, however, is whether a query actually runs with the requested DOP (Degree Of Parallelism) or whether Oracle downgrades it to a lower DOP because resources are low.
Doug Burns has a full analysis on his blog. (http://oracledoug.com/serendipity/index.php?/archives/1231-How-can-I-tell-the-actual-DOP-used-for-my-Parallel-Query.html).
From reading Doug's blog and various articles, I think the best way to determine whether or not a session is using the requested DOP is to run the following query.
select n.name, s.value
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name like 'Parallel%'
and    s.sid = &sid_val;
If 'Parallel operations not downgraded' is <> 0, the session is running with the requested DOP. However, if 'Parallel operations downgraded to serial' is <> 0, the query is not running in parallel at all.
The statistics 'Parallel operations downgraded 75 to 99 pct', 'Parallel operations downgraded 50 to 75 pct', 'Parallel operations downgraded 25 to 50 pct' and 'Parallel operations downgraded 1 to 25 pct' show whether the DOP was downgraded to a lower value and by what percentage.
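Another way to cross-check, while the query is still running, is to count the parallel execution slaves actually attached to the query coordinator; v$px_session shows the granted versus requested degree per slave (&qc_sid is the coordinator's SID):

```sql
-- one row per PX slave, plus one for the coordinator itself
select sid, serial#, degree, req_degree
from   v$px_session
where  qcsid = &qc_sid;
```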
TIP #42: Package compilation takes long time
When I checked the compiling session, it was waiting on 'library cache lock' and 'library cache pin'.
Reason: this was a central security package used by most sessions. The compilation hung because other sessions were executing procedures/functions of this package.
I found MetaLink note 122793.1 very useful; it basically recommends two options.
Option 1. Find the blocking sessions by enabling a trace.
- Run the hanging compilation session and find its process id (PID).
- Log in with a new session and enable tracing with:
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME SYSTEMSTATE LEVEL 10';
- Check user_dump_dest for the trace file and find the process that matches the PID of the hanging session (assume it is PID 8). Note its handle address, then find the other PID holding the same handle. In the following example PID 8 is the hanging session and PID 9 is the blocking session. With PID 9 known, more information about the blocking session can be found.
PROCESS 8:
----------------------------------------
.
.
.
.
waiting for 'library cache lock' blocking sess=0x0 seq=253 wait_time=0
handle address=5023ef9c, lock address=5019cad4, 10*mode+namespace=15
PROCESS 9:
----------------------------------------
.
.
.
LIBRARY OBJECT PIN: pin=5019d5e4 handle=5023ef9c mode=X lock=0
user=5005fad4 session=5005fad4 count=1 mask=0511 savepoint=118218 flags=[00]
Option 2. Find the blocking sessions by running a query.
Run the following query. X$KGLLK holds all library cache locks; KGLLKREQ > 0 means a lock was requested by a session but has not been granted yet.
This query lists all sessions blocking the hanging session.
SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0)
);
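The 'saddr_from_v$session' placeholder above is the session address (SADDR) of the blocked session; assuming you know the blocked session's SID, it can be fetched first:

```sql
-- get the session address of the hanging compile session
select saddr from v$session where sid = &blocked_sid;
```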
TIP #41: Estimate backup size (2)
I ran the following steps in 10.2.0.2 on AIX5 64-bit. RMAN behavior in 10.2.0.2 is closer to reality, although the backup size is still not 100% precise against the database size.
To sum up: before 10.2.0.2, the backup size could differ considerably from the allocated size in dba_segments (RMAN only skips blocks that were never touched), while in 10.2.0.2 the backup size of a full database is roughly the same as the used space in the database.
Sample test in 10gR2. (Same scenario as 10.1).
=== Create new tablespace
SQL> create tablespace rmantst datafile '/o022/test/rmantst.dbf' size 200M;
Tablespace created.
-- Taking RMAN backup from empty tablespace
run{
backup tablespace rmantst format '/o022/test/backup_after_ts_creation_%s.bak';
}
Starting backup at 15-APR-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=533 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00028 name=/o022/test/rmantst.dbf
channel ORA_DISK_1: starting piece 1 at 15-APR-07
channel ORA_DISK_1: finished piece 1 at 15-APR-07
piece handle=/o022/test/backup_after_ts_creation_4.bak tag=TAG20070415T112434 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 15-APR-07
Starting Control File and SPFILE Autobackup at 15-APR-07
piece handle=/o022/backup/control_back_c-1721358348-20070415-01 comment=NONE
Finished Control File and SPFILE Autobackup at 15-APR-07
-- Check backup size
oracle(/o022/test):ls -l
total 410064
-rw-r----- 1 oracle dba 209723392 Apr 15 11:24 rmantst.dbf
-rw-r----- 1 oracle dba 106496 Apr 15 11:24 backup_after_ts_creation_4.bak (Backup of empty tablespace is only 106KB).
---- Create some objects in rmantst tablespace
create table rmantest2 tablespace rmantst as select * from dba_objects;
create table rmantest3 tablespace rmantst as select * from dba_objects;
create table rmantest4 tablespace rmantst as select * from dba_objects;
create table rmantest5 tablespace rmantst as select * from dba_objects union all select * from dba_objects union all select * from dba_objects union all select * from dba_objects;
create table rmantest6 tablespace rmantst as select * from dba_objects union all select * from dba_objects union all select * from dba_objects union all select * from dba_objects;
create table rmantest7 tablespace rmantst as select * from dba_objects union all select * from dba_objects union all select * from dba_objects union all select * from dba_objects;
SQL> select sum(bytes)/1024/1024 from dba_data_files where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
200
SQL> select sum(bytes)/1024/1024 from dba_segments where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
112 <== 112MB of 200MB was allocated.
--- Take tablespace backup
RMAN> run
2> {
backup tablespace rmantst format '/o022/test/backup_after_tbl_creation_%s.bak';
}
3> 4>
Starting backup at 15-APR-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=488 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00028 name=/o022/test/rmantst.dbf
channel ORA_DISK_1: starting piece 1 at 15-APR-07
channel ORA_DISK_1: finished piece 1 at 15-APR-07
piece handle=/o022/test/backup_after_tbl_creation_6.bak tag=TAG20070415T113143 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 15-APR-07
Starting Control File and SPFILE Autobackup at 15-APR-07
piece handle=/o022/backup/control_back_c-1721358348-20070415-02 comment=NONE
Finished Control File and SPFILE Autobackup at 15-APR-07
RMAN> exit
Recovery Manager complete.
oracle(/o022/test):ls -l
total 634320
-rw-r----- 1 oracle dba 114786304 Apr 15 11:31 backup_after_tbl_creation_6.bak (Backup size is ~ 114M)
-rw-r----- 1 oracle dba 106496 Apr 15 11:24 backup_after_ts_creation_4.bak
-rw-r----- 1 oracle dba 209723392 Apr 15 11:31 rmantst.dbf
-- Remove data from all objects in tablespace.
SQL> truncate table rmantest1;
Table truncated.
SQL> truncate table rmantest3;
Table truncated.
SQL> truncate table rmantest4;
Table truncated.
SQL> truncate table rmantest5 ;
Table truncated.
SQL> truncate table rmantest6;
Table truncated.
SQL> truncate table rmantest7;
Table truncated.
SQL> select sum(bytes)/1024/1024 from dba_data_files where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
200
SQL> select sum(bytes)/1024/1024 from dba_segments where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
.4375 <== only 400K was allocated.
-- Take RMAN backup
run
{
backup tablespace rmantst format '/o022/test/backup_after_tbl_truncate_%s.bak';
}
oracle(/o022/test):ls -ltr
total 641792
-rw-r----- 1 oracle dba 106496 Apr 15 11:24 backup_after_ts_creation_4.bak
-rw-r----- 1 oracle dba 114786304 Apr 15 11:31 backup_after_tbl_creation_6.bak
-rw-r----- 1 oracle dba 209723392 Apr 15 11:35 rmantst.dbf
-rw-r----- 1 oracle dba 3825664 Apr 15 11:35 backup_after_tbl_truncate_8.bak (Backup after truncate is only ~3.6MB. It is not exactly the same as dba_segments, but it is not as bad as 10.2.0.1.)
TIP #40 : Different options for creating table from select
My goal was to create the table as quickly as possible.
Here are different options for the create table as select statement:
- Simple create as select (CTS) with no option
- CTS with nologging option
- CTS with unrecoverable option
- CTS with parallel option
- CTS with parallel and nologging option
The nologging and unrecoverable options generate as little redo as possible. (The database must not be in force logging mode.)
The following shows practical examples of the options above.
========== Option 1 : Simple CTS =======================
SQL> create table tbl1 as select * from dba_segments;
Table created.
Elapsed: 00:00:43.04
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 405476 <== 405K redo was generated
========== Option 2 :Nologging option =======================
SQL> create table tbl2 nologging as select * from dba_segments;
Table created.
Elapsed: 00:00:42.06 <== Not major improvement in SQL execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 436128 <== Only 30K redo logs were generated. (Major improvement in redo generation)
========== Option 3 :Unrecoverable option =======================
SQL> create table tbl3 unrecoverable as select * from dba_segments;
Table created.
Elapsed: 00:00:45.00 <== Not major improvement in SQL execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 466688 <== Only 30K redo logs were generated. (Major improvement in redo generation)
========== Option 4 :Parallel option =======================
SQL> create table tbl4 parallel (degree 4) as select * from dba_segments;
Table created.
Elapsed: 00:00:14.09 <== Major improvement in SQL execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 894304 <== 400K redo generation as normal
SQL> create table tbl5 parallel as select * from dba_segments;
Table created.
Elapsed: 00:00:16.03 <== Major improvement in SQL execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1314768 <== 400K redo generation as normal
========== Option 5 :Parallel and nologging option =======================
SQL> create table tbl6 parallel unrecoverable as select * from dba_segments;
Table created.
Elapsed: 00:00:26.00 <== Still has improvement in total execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1366836 <== (52K) - Less redo than the normal ~400K was generated
========== Other option =======================
Direct Insert:
SQL> create table tbl7 unrecoverable as select /*+ append */ * from dba_segments;
Table created.
Elapsed: 00:00:41.08 <== Not major improvement
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1416912 <== (50K) - Redo logs less than normal was generated
SQL> create global temporary table tbl8 as select /*+ append */ * from dba_segments;
Table created.
Elapsed: 00:00:42.01 <== No improvement in total execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1440580 <== (23K) - Minimum redo logs were generated
=====
A combination of parallelism and nologging is the best choice with respect to performance. Make sure the database is not in "force logging" mode. Tuning the select statement before running the CTS command is highly recommended.
TIP #39: SSO in Oracle application server
SSO components:
- OC4J_Security
- HTTP server
Options for bouncing SSO:
- Bounce SSO component only
- Bounce SSO and OID
Flow of SSO
- The user tries to access the application for the first time.
- There is no login cookie, so the user is redirected to the SSO server.
- SSO returns a username/password page to the user.
- SSO verifies the username/password against OID.
- If the password is OK, SSO returns a token to the client with the list of all applications the user has access to. This token is stored on the client as a cookie.