TIP # 66: IPC error after applying patch or relinking Oracle
A client could not start up the database and got the following error:
SQL> startup nomount;
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:sendmsg failed with status: 59
ORA-27301: OS failure message: Message too long
ORA-27302: failure occurred at: sskgxpsnd1
SQL>
Oracle was not able to allocate memory, and the alert log had nothing regarding this issue.
After some research on Metalink, I found Note 300956.1, which recommends bumping up the following network parameters:
# no -o tcp_sendspace=262144
# no -o tcp_recvspace=262144
# no -o udp_sendspace=65536
# no -o udp_recvspace=262144
# no -o rfc1323=1
Increasing those values did not fix the issue; however, more details were revealed.
This time after startup, the following message was shown:
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
Also alert log had more info :
Cluster communication is configured to use the following interface(s) for this instance
165.123.81.30
Fri Dec 7 10:35:10 2007
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
PMON started with pid=2, OS id=630922
DIAG started with pid=3, OS id=651416
PSP0 started with pid=4, OS id=360672
LMON started with pid=5, OS id=647294
LMD0 started with pid=6, OS id=163956
MMAN started with pid=7, OS id=581878
DBW0 started with pid=8, OS id=614592
LGWR started with pid=9, OS id=626764
CKPT started with pid=10, OS id=643140
SMON started with pid=11, OS id=499806
RECO started with pid=12, OS id=659522
CJQ0 started with pid=13, OS id=663620
MMON started with pid=14, OS id=569480
MMNL started with pid=15, OS id=671816
Fri Dec 7 10:35:12 2007
USER: terminating instance due to error 29702
Instance terminated by USER, pid = 618564
It seems that for some reason, after relinking Oracle, the binary installation was considered a RAC install.
As a result, I turned off the RAC option by running the following commands:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk rac_off
$ make -f ins_rdbms.mk ioracle
This resolved the issue and I was able to start up the database without any problem.
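As a side note, a quick way to see which way the binary is currently linked is to check v$option once the instance is up (a minimal sketch, nothing environment-specific):
SQL> select parameter, value from v$option where parameter = 'Real Application Clusters';
If the value shows TRUE after a plain relink on a non-RAC box, the rac_off/ioracle relink above is the fix.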
TIP #65 : 10g listener.
To answer: in Oracle 10g, the listener is secure by itself and there is no need to set a password for it, as was required in older versions to protect the listener.
By default, the listener uses local OS authentication, which means that only the OS user who started the listener can administer it.
If you run lsnrctl status in 10g, you should see a line like the following in the output:
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for IBM/AIX RISC System/6000: Version 10.2.0.2.0 - Production
Start Date
Uptime
Trace Level off
Security ON: Local OS Authentication
SNMP ON
Listener Parameter File
Listener Log File
In the above example, if the listener was started as the oracle user and user X attempts to administer it, or the oracle user from a different node attempts to administer it, the following error will appear:
TNS-01190: The user is not authorized to execute the requested listener command
On the other hand, if a password is set for the 10g listener, every user who knows the password can administer the listener. For a password-protected listener in 10g, the result of lsnrctl status would be something like this:
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for IBM/AIX RISC System/6000: Version 10.2.0.2.0 - Production
Start Date
Uptime
Trace Level off
Security ON: Password or Local OS Authentication
SNMP ON
Listener Parameter File
Listener Log File
To conclude: if only the user who started the listener should be allowed to administer it, you do not need to set a password for the 10g listener. The listener is protected by itself, and the only user who can administer it is the listener owner. To me, this seems more restrictive.
However, if you want other users to administer the listener, you still need a password-protected listener. Every user who knows the password can then run admin commands against the listener.
To me, this seems less restrictive.
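For reference, here is a hedged sketch of how a listener password is set from the lsnrctl prompt (prompts abbreviated; change_password, set password and save_config are standard lsnrctl commands):
LSNRCTL> change_password
Old password:            <-- just press Enter if no password was set before
New password:
Reenter new password:
LSNRCTL> set password
Password:
LSNRCTL> save_config     <-- stores the (encrypted) password in listener.ora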
TIP # 64 : How to determine bind variable value and type from trace file.
One of our readers asked how to identify the data type of a bind variable, so I dedicated this post to the answer.
To find more information about bind variables, locate the BINDS keyword in the trace file.
That part looks something like this:
BINDS #2:
bind 0: dty=1 mxl=32(30) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=32 offset=0
bfp=030a79ec bln=32 avl=01 flg=05
value="A"
dty determines the bind variable type and value shows the bind variable value at the time of execution.
Different dty values represent different data types; an alternative way to see bind types without tracing is shown after the list below.
Typical dty values are:
1 VARCHAR2 or NVARCHAR2
2 NUMBER
8 LONG
11 ROWID
12 DATE
23 RAW
24 LONG RAW
96 CHAR
112 CLOB or NCLOB
113 BLOB
114 BFILE
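As an alternative in 10g, bind values and their data types can also be pulled from V$SQL_BIND_CAPTURE without any tracing (a quick sketch; the sql_id is a placeholder you would first look up in v$sql):
select name, position, datatype_string, value_string from v$sql_bind_capture where sql_id = '&sql_id';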
To demonstrate, I ran some queries.
------ Query #1 : Number Bind variable
SQL> alter session set statistics_level=ALL;
Session altered.
SQL> alter session set max_dump_file_size=UNLIMITED;
Session altered.
SQL> alter session set events '10046 trace name context forever, level 12';
Session altered.
SQL> variable b number;
SQL> declare
2 cnt number;
3 begin
4 :b:=1;
5 select count(*) into cnt from dba_objects where object_id=:b;
6 end;
7 /
PL/SQL procedure successfully completed.
SQL> alter session set events '10046 trace name context off';
Session altered.
Checking trace file ......
PARSING IN CURSOR #2 len=52 dep=1 uid=0 oct=3 lid=0 tim=2797155817 hv=1220784193 ad='130a9d4c'
SELECT count(*) from dba_objects where object_id=:b1
END OF STMT
PARSE #2:c=15625,e=24270,p=0,cr=5,cu=0,mis=1,r=0,dep=1,og=0,tim=2797155809
BINDS #2:
bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=24 offset=0
bfp=030a7b2c bln=22 avl=02 flg=05
value=1
EXEC #2:c=0,e=2062,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=2797159524
FETCH #2:c=0,e=37,p=0,cr=3,cu=0,mis=0,r=1,dep=1,og=4,tim=2797159885
------ Query #2 : Varchar2 Bind variable
SQL> alter session set events '10046 trace name context forever, level 12';
Session altered.
SQL> variable b varchar2(30);
SQL> declare
2 cnt number;
3 begin
4 :b:='A';
5 select count(*) into cnt from dba_objects where object_name=:b;
6 end;
7 /
PL/SQL procedure successfully completed.
SQL> alter session set events '10046 trace name context off';
Checking trace file .....
PARSING IN CURSOR #2 len=54 dep=1 uid=0 oct=3 lid=0 tim=3007778755 hv=2029951970 ad='12ff2c80'
SELECT count(*) from dba_objects where object_name=:b1
END OF STMT
PARSE #2:c=15625,e=12991,p=0,cr=2,cu=0,mis=1,r=0,dep=1,og=0,tim=3007778748
BINDS #2:
bind 0: dty=1 mxl=32(30) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=32 offset=0
bfp=030a79ec bln=32 avl=01 flg=05
value="A"
EXEC #2:c=0,e=2167,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=3007782524
WAIT #2: nam='db file sequential read' ela= 9458 p1=1 p2=89 p3=1
WAIT #2: nam='db file sequential read' ela= 6606 p1=1 p2=26791 p3=1
TIP # 63 : DIRECT Load and redo log generation
A client was using direct load expecting less archivelog generation; however, in their case they did not see any difference.
I will try to explain how direct load helps, and when it does not.
Some facts :
1. Undo would normally be used to un-insert the rows in the event of a failure or rollback; with DIRECT LOAD, undo is not necessary since the new rows are added entirely above the high water mark of the table.
2. When direct load (/*+ append */) is used, Oracle can skip undo generation for the TABLE data, but not for the indexes.
3. In archivelog mode, REDO is normally still generated with direct load (/*+ APPEND */); it is UNDO that is skipped, and then only for the table itself.
4. If the table is placed into nologging mode (or nologging is used in the insert command), then redo for the table as well as undo can be skipped. Again, only for the table, not for any indexes on it.
5. The small amount of redo that is still generated with nologging/direct load is used to protect the data dictionary.
6. To prevent archivelog generation with direct load, the database and tablespace should not be in force logging mode (check v$database and dba_tablespaces).
Considering the above facts, my first answer to the client was: "Archives are generated because of the indexes on your tables."
To demonstrate, please follow these simple steps:
========== Table without index in Direct load ===========
SQL> create table tbl2 as select * from dba_objects where 1=2;
Table created.
--- Checking redo and undo before direct load
select a.value,b.name from v$sesstat a,v$statname b where a.statistic#=b.statistic# and a.sid in (select sid from v$mystat) and name like '%redo size%';
VALUE NAME
---------- ----------
0 redo size
select sum(undoblks)*8192/1024/1024 UNDO_MB from v$undostat where trunc(begin_time,'DD')=trunc(sysdate,'DD');
UNDO_MB
----------------------------
19.78125
---- Direct load
insert /*+ append */ into tbl2 nologging select * from dba_objects;
select a.value,b.name from v$sesstat a,v$statname b where a.statistic#=b.statistic# and a.sid in (select sid from v$mystat) and name like '%redo size%';
VALUE NAME
---------- ----------
7068580 redo size
select sum(undoblks)*8192/1024/1024 UNDO_MB from v$undostat where trunc(begin_time,'DD')=trunc(sysdate,'DD');
UNDO_MB
----------------------------
25.78125
Generated redo : 7068580 bytes
Generated undo : 6 MB
========== Table with index in Direct load ===========
SQL> create table tbl1 as select * from dba_objects where 1=2;
Table created.
SQL> create index idx1 on tbl1(object_name);
Index created.
SQL> create index idx2 on tbl1(owner);
Index created.
---- redo log before direct load
select a.value,b.name from v$sesstat a,v$statname b where a.statistic#=b.statistic# and a.sid in (select sid from v$mystat) and name like '%redo size%';
VALUE NAME
---------- ----------
0 redo size
select sum(undoblks)*8192/1024/1024 UNDO_MB from v$undostat where trunc(begin_time,'DD')=trunc(sysdate,'DD');
UNDO_MB
----------
25.78125
------- Direct load
SQL> insert /*+ append */ into tbl1 nologging select * from dba_objects;
62908 rows created.
------ Redo log and Undo after direct load.
select a.value,b.name from v$sesstat a,v$statname b where a.statistic#=b.statistic# and a.sid in (select sid from v$mystat) and name like '%redo size%';
VALUE NAME
---------- ----------
28251320 redo size
select sum(undoblks)*8192/1024/1024 UNDO_MB from v$undostat where trunc(begin_time,'DD')=trunc(sysdate,'DD');
UNDO_MB
----------------------------
42.5078125
Generated redo : 28251320 bytes
Generated undo : 17 MB
More archivelog and more undo are generated by a direct load on a table with indexes.
Sum up :
=========
To use direct load and take advantage of less archivelog generation and better performance, always consider the following:
1. Disable indexes during the direct load.
2. Make sure to use /*+ append */ together with nologging.
3. Make sure the database and tablespace are not in force logging mode (a quick check is shown below).
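Regarding point 3, these standard dictionary queries show the current logging settings (nothing here is environment-specific):
select force_logging from v$database;
select tablespace_name, logging, force_logging from dba_tablespaces;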
oradbatips is 1 year old !
It was exactly one year ago that I decided to create this blog in order to share my little knowledge with others around the world.
Thanks to all of you for giving me courage with your comments and positive criticism, which is my main motivation for keeping this blog updated.
My plan for the upcoming years is to keep this blog up to date.
I am also thinking of having a place on this blog where readers can ask any questions, problems and issues related to Oracle administration.
As always, any comment is really appreciated.
TIP 62#: ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE] DURING DB 10g SHUTDOWN (2)
Since setting up this trigger for some clients, shutdown has worked properly. However, there were some odd situations in which I found the following message in the alert log:
ORA-00604: error occurred at recursive SQL level 1
ORA-12663: Services required by client not available on the server
ORA-36961: Oracle OLAP is not available.
ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
ORA-06512: at line 15
The above situation can be fixed by disabling the SYS.OLAPISTARTUPTRIGGER and SYS.OLAPISHUTDOWNTRIGGER triggers. As a result, the shutdown trigger can look something like this:
CREATE or replace TRIGGER flush_shared_pool
BEFORE SHUTDOWN ON DATABASE
BEGIN
execute immediate 'alter TRIGGER SYS.OLAPISTARTUPTRIGGER DISABLE';
execute immediate 'ALTER TRIGGER SYS.OLAPISHUTDOWNTRIGGER DISABLE';
execute immediate 'ALTER SYSTEM FLUSH SHARED_POOL';
execute immediate 'alter TRIGGER SYS.OLAPISTARTUPTRIGGER ENABLE';
execute immediate 'ALTER TRIGGER SYS.OLAPISHUTDOWNTRIGGER ENABLE';
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR (num => -20000, msg => 'Error flushing pool');
END;
Please let me know whether or not this piece of code resolves the issue for you.
P.S.: The latest update from Oracle indicates that this bug will be fixed in 10.2.0.4.
TIP 61#: ORA-600 [LIBRARYCACHENOTEMPTYONCLOSE] DURING DB 10g SHUTDOWN
This is reported as a bug in Metalink and Oracle has not released any fix so far (Bug 4483084).
This error is kind of severe for some of my clients, especially warehouse environments which mainly rely on cold backup.
These environments need a clean shutdown.
The workaround is to implement a BEFORE SHUTDOWN trigger to flush the shared pool.
Here is a sample of the shutdown trigger which worked for me:
CREATE or replace TRIGGER flush_shared_pool
BEFORE SHUTDOWN ON DATABASE
BEGIN
execute immediate 'ALTER SYSTEM FLUSH SHARED_POOL';
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR (num => -20000, msg => 'Error flushing pool');
END;
I have not seen any problem since implementing this trigger.
Please let me know if that also works for you.
TIP 60#: Force Opatch to use non-default inventory
If the inventory does not match the Oracle home, the following errors may be raised during an opatch run:
LsInventorySession failed: OracleHomeInventory::load() gets null oracleHomeInfo
LsInventory: OPatch Exception while accessing O2O
OPATCH_JAVA_ERROR : An exception of type "OPatchException" has occurred:
OPatch Exception:
OUI found no such ORACLE_HOME set in the environment
Can not get details for given Oracle Home
An exception occurs
null
First of all, check the opatch output and look at the "Location of Oracle Inventory Pointer" entry.
By default it is /etc/oraInst.loc. If this entry does not point to the right location, you have two options to fix the situation:
- Copy a valid oraInst.loc to /etc/oraInst.loc as root.
This is not always feasible, since the DBA rarely has root access on the box.
- Run the following command to point opatch to the correct inventory (replace /etc/oraInst.loc.9i_version with your own inventory pointer location):
opatch lsinventory -invPtrLoc /etc/oraInst.loc.9i_version
Here is a sample :
oracle@test(/home/oracle): opatch lsinventory
Invoking OPatch 10.2.0.2.3
Oracle interim Patch Installer version 10.2.0.2.3
Copyright (c) 2007, Oracle Corporation. All rights reserved..
Oracle Home : /s005/home/oracle/product/10.2.0.2
Central Inventory : /s006/home/oracle/infraAS10g/oraInventory
from : /etc/oraInst.loc
OPatch version : 10.2.0.2.3
OUI version : 10.2.0.2.0
OUI location : /s005/home/oracle/product/10.2.0.2/oui
Log file location : /s005/home/oracle/product/10.2.0.2/cfgtoollogs/opatch/opatch2007-10-18_16-16-19PM.log
List of Homes on this system:
Home name= OUIHome1, Location= "/s006/home/oracle/infraAS10g/product/10.1.0.4"
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
Oracle Home dir. path does not exist in Central Inventory
Oracle Home is a symbolic link
Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory::load() gets null oracleHomeInfo
OPatch failed with error code 73
=============
oracle@test(/home/oracle):opatch lsinventory -invPtrLoc /etc/oraInst.loc.9i_version
Invoking OPatch 10.2.0.2.3
Oracle interim Patch Installer version 10.2.0.2.3
Copyright (c) 2007, Oracle Corporation. All rights reserved..
Oracle Home : /s005/home/oracle/product/10.2.0.2
Central Inventory : /s005/home/oracle/oraInventory
from : /etc/oraInst.loc.9i_version
OPatch version : 10.2.0.2.3
OUI version : 10.2.0.2.0
OUI location : /s005/home/oracle/product/10.2.0.2/oui
Log file location : /s005/home/oracle/product/10.2.0.2/cfgtoollogs/opatch/opatch2007-10-18_16-17-29PM.log
Lsinventory Output file location : /s005/home/oracle/product/10.2.0.2/cfgtoollogs/opatch/lsinv/lsinventory2007-10-18_16-17-29PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (4):
Oracle Client 10.2.0.1.0
Oracle Database 10g 10.2.0.1.0
Oracle Database 10g Products 10.2.0.1.0
Oracle Database 10g Release 2 Patch Set 1 10.2.0.2.0
There are 4 products installed in this Oracle Home.
Interim patches (2) :
Patch 5075470 : applied on Sun Feb 18 10:58:23 EST 2007
Created on 6 Apr 2006, 03:38:28 hrs US/Pacific
Bugs fixed:
5075470
Patch 4689959 : applied on Sun Feb 18 10:33:33 EST 2007
Created on 22 Sep 2006, 04:57:06 hrs US/Pacific
Bugs fixed:
4689959
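For reference, the inventory pointer file passed to -invPtrLoc is just a small text file along these lines (the inventory path is taken from the sample output above; the group name is an assumption and should match your install group):
inventory_loc=/s005/home/oracle/oraInventory
inst_group=dba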
TIP 59#: CPU cost vs I/O cost in execution plan
In this post, I am trying to make more sense of the I/O cost, CPU cost and total cost figures behind an execution plan. The optimizer uses a CPU-to-I/O ratio to determine how much CPU cost is equivalent to one I/O cost.
Here is a guideline on how to find this ratio and see whether it makes sense on your system.
1. Enable event 10053 .
alter session set max_dump_file_size = unlimited;
ALTER SESSION SET EVENTS '10053 trace name context forever,level 1';
EXPLAIN PLAN FOR --SQL STATEMENT--;
exit
2. Find trace file.
Trace file should be in user_dump_dest
3. Go through the trace file and find the access path section for an index full scan or a table full scan.
In the access path lines of my sample trace:
resc_io is the I/O cost (559.00 in this sample)
resc_cpu is the CPU cost (27509992 in this sample)
cost is the total cost (562.23 in this sample)
CPU to IO ratio can be calculated with the following formula :
cpu_to_io_ratio = resc_cpu / (cost - resc_io)
In this case the CPU-to-I/O ratio is 27509992/(562.23-559.00) = 8517025.39, which means that each unit of I/O cost equals 8517025.39 units of CPU cost.
In a healthy system this factor should not be very low; also, the higher this ratio is, the more effective an index will be at improving performance.
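Relatedly, the system statistics that feed the optimizer's CPU costing can be inspected directly with a query like this (a minimal sketch; it assumes you can read SYS-owned objects):
select sname, pname, pval1 from sys.aux_stats$;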
PS : Those who are interested, please Email me for full documentation on 10053 event and more sample on this subject.
TIP 58# : Column ordering in composite index
1. Can only SQL with a condition on the leading indexed column take advantage of a composite index?
The answer is no. A SQL statement with a condition on columns that are not the leading column of a composite index may still take advantage of the index.
Here is a sample:
-- Create sample table
create table test as select * from user_objects;
-- Create composite index on temporary and object_id
create index test_idx on test(temporary,object_id);
-- Analyze new table with its index
exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST',cascade=>TRUE, method_opt=>'FOR ALL INDEXED COLUMNS');
-- Check a simple query which has a condition on both indexed columns.
As one would expect, the optimizer should use the existing index here.
SQL> select object_id, object_name from test where temporary='Y' and object_id=19;
no rows selected
Index has been used.
-- Check a simple query which has a condition only on the second column.
SQL> select object_id, object_name from test where object_id=245630;
no rows selected
Index has been used.
Interestingly, the query uses the index even though its leading column is not object_id.
That is the beauty of INDEX SKIP SCAN.
More testing: creating a composite index with the same columns but in reverse order (this time on object_id and then temporary).
-- Drop the previous index and recreate it on object_id and temporary
drop index test_idx;
create index test_idx on test(object_id,temporary);
-- Gather stats
exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST',cascade=>TRUE, method_opt=>'FOR ALL INDEXED COLUMNS');
-- Check a simple query which has a condition on both indexed columns.
SQL> select object_id, object_name from test where temporary='Y' and object_id=19;
no rows selected
Index has been used.
-- Check a simple query which has a condition only on the second column (temporary).
select object_id, object_name from test where TEMPORARY='Y';
The index has not been used.
The optimizer does not choose the composite index; it goes with a full table scan.
Conclusion :
---------------
When creating a composite index, the order of the columns is important.
If you put less selective columns as the leading columns of the index and the most selective columns at the end, queries with conditions only on the non-leading index columns may also take advantage of an INDEX SKIP SCAN.
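If you want to verify the chosen plan yourself, SQL*Plus autotrace is a quick way to see whether the index (and an INDEX SKIP SCAN) was picked; a minimal sketch reusing the test table above, assuming autotrace is set up (plan table and plustrace role):
SQL> set autotrace traceonly explain
SQL> select object_id, object_name from test where object_id=245630;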
For those who are interested, Send me Email for more test cases.
TIP#57 : How to use PL/SQL profiler
Here, I try to explain some practical notes from using this profiler, mostly for a client, to tune code.
1. The following SQL statement shows the elapsed time of each piece of code; in addition to the line number, the SQL text is also shown, which makes finding the bottleneck easier.
select p.unit_name, p.occured, p.tot_time, p.line# line,
substr(s.text, 1,75) text
from
(select u.unit_name, d.TOTAL_OCCUR occured,
(d.TOTAL_TIME/1000000000) tot_time, d.line#
from plsql_profiler_units u, plsql_profiler_data d
where d.RUNID=u.runid and d.UNIT_NUMBER = u.unit_number
and d.TOTAL_OCCUR >0
and u.runid= &runid) p,
user_source s
where p.unit_name = s.name(+) and p.line# = s.line (+)
order by p.unit_name, p.line#;
2. Anonymous blocks in the result are parts of code which are not owned by the user who ran dbms_profiler. It is better to run the profiler as the user who owns the PL/SQL code.
TIP #56 : Invalid username/password; logon denied
First of all, I was suspicious of the SYS user's password.
Checking with the client, the SYS password was right, but remote login to the database was not possible.
I found that REMOTE_LOGIN_PASSWORDFILE was set to NONE in the database, which means privileged users cannot log in to the database remotely.
In order to allow the SYS user to access the database remotely, this parameter should not be NONE.
To do that, the following steps are recommended (a sketch of the full sequence follows the list):
- Shutdown database.
- Move the database password file aside (on Unix, under $ORACLE_HOME/dbs) and create a new password file:
orapwd file=
- Set REMOTE_LOGIN_PASSWORDFILE to EXCLUSIVE
- Startup database
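Put together, the sequence looks roughly like this; the file name, password and entries count are placeholders, and since REMOTE_LOGIN_PASSWORDFILE is a static parameter the change needs scope=spfile (or an init.ora edit) plus a restart:
$ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=mysecret entries=5
SQL> alter system set remote_login_passwordfile=EXCLUSIVE scope=spfile;
SQL> shutdown immediate
SQL> startup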
Other possible reasons for this error are:
- The password file does not exist.
- The password supplied does not match the one in the password file.
- The password file has been changed since the instance was started.
TIP #55 : iasconsole and Report server in 9.0.4 Oracle application server
An error occurred while restarting "Reports Server: rep_name??".
An error occurred while talking to OPMN. Could not find entity for "rep_name???".
rep_name?? here is the name of the report server.
Do not worry; there is nothing wrong with the report server. As Oracle confirmed in Metalink note 299508.1, the ias console cannot be used in 9.0.4 to bounce the report server (even though the start/stop button for the report server in iasconsole is active!). Starting the report server has to happen through opmn only.
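A hedged sketch of doing that from the command line (the ias-component name is whatever your Reports Server is called on your install; opmnctl status lists it):
$ opmnctl status
$ opmnctl stopproc ias-component=<rep_server_name>
$ opmnctl startproc ias-component=<rep_server_name>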
TIP 54# : "Cannot open or remove a file" warning when applying patch
Error in writing to file $ORACLE_HOME/lib32/libnjssl10.so
Cannot open or remove a file containing a running program
The patch was running on an AIX platform and, as Oracle recommends before starting a patchset, I had run /usr/sbin/slibclean as root, which is supposed to clean the box of shared libraries still held by running processes.
This experience showed me that slibclean is not always perfect and there may be some odd situations in which other remedies are required.
Here is what I did to get past the situation:
- cp $ORACLE_HOME/lib32/libnjssl10.so $ORACLE_HOME/lib32/libnjssl10.so.org
- cp $ORACLE_HOME/lib32/libnjssl10.so.org $ORACLE_HOME/lib32/libnjssl10.so
The copy did not fix the situation.
- mv $ORACLE_HOME/lib32/libnjssl10.so $ORACLE_HOME/lib32/libnjssl10.so.org
- mv $ORACLE_HOME/lib32/libnjssl10.so.org $ORACLE_HOME/lib32/libnjssl10.so
The move fixed the situation.
As a result:
- On AIX, first try /usr/sbin/slibclean as root.
- On all platforms, if the file exists, try to move it to a new name and then move it back to the original.
TIP 53# : Exclude option in data pump on Window.
I just wanted to use this option to exclude a table from a whole-schema export at a client's request. The client is running a 10g database on Windows.
According to the Oracle documentation and expdp help=y, it is supposed to be as simple as adding exclude=table:table_name.
However, I got the following errors when I ran it:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39071: Value for EXCLUDE_NAME_EXPR filter is badly formed.
ORA-00920: invalid relational operator
I tried the option with the following syntaxes:
exclude=TABLE:"= 'TABLE_NAME'"
exclude=TABLE:TABLE_NAME
exclude=TABLE:" IN ('TABLE_NAME') "
No luck; I got the same error.
Finally, I figured out that Windows cannot parse the double quotes and an escape sequence needs to be added.
So the exclude option on Windows should have a syntax like this:
exclude=TABLE:\"='TABLE_NAME'\"
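For example, a complete (hypothetical) command line on Windows might look like this, with the schema, directory, dump file and table names all being placeholders:
expdp system/password schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp exclude=TABLE:\"='EMP'\"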
TIP 52# : Tablespace map
In this post, I share a PL/SQL script which shows the mapping of free space and used space in a tablespace. This script helped me resolve a tablespace fragmentation issue for a client: analyzing its result leads to detecting the offending objects and relocating them.
Script details
------------------
CREATE OR REPLACE PROCEDURE mapts (target_ts VARCHAR2,min_extents NUMBER DEFAULT 8) IS
cur_block_id NUMBER;
prev_block_id NUMBER;
cur_blocks NUMBER;
prev_blocks NUMBER;
map_str CLOB;
alloc_str CLOB;
tmp NUMBER;
file_id_v NUMBER;
CURSOR mycur IS select segment_name,blocks,block_id,EXTENT_ID from dba_extents where TABLESPACE_NAME=Upper(target_ts) AND file_id=file_id_v order by block_id,EXTENT_ID;
myvar mycur%ROWTYPE;
BEGIN
FOR rec IN (SELECT file_id FROM dba_data_files WHERE tablespace_name=Upper(target_ts)) LOOP
file_id_v:=rec.file_id;
cur_block_id:=1;
prev_block_id:=1;
prev_blocks:=0;
OPEN mycur;
LOOP
FETCH mycur INTO myvar;
EXIT WHEN mycur%NOTFOUND;
cur_block_id:=myvar.block_id;
cur_blocks:=myvar.blocks;
IF(prev_block_id+ prev_blocks= cur_block_id) THEN
FOR i IN 1..round(myvar.blocks/min_extents) LOOP
alloc_str:=alloc_str||'-'|| '+';
END LOOP;
tmp:= trunc(myvar.blocks/min_extents);
map_str:=map_str||'-'|| myvar.segment_name;
map_str:=map_str||'('||To_Char(tmp)||')';
prev_block_id:=cur_block_id;
prev_blocks:=cur_blocks;
ELSE
FOR i IN 1..Round((cur_block_id-prev_block_id-prev_blocks)/min_extents) LOOP
map_str:=map_str||'-'|| '*';
alloc_str:=alloc_str||'-'|| '*';
END LOOP;
FOR i IN 1..Round(myvar.blocks/min_extents) LOOP
alloc_str:=alloc_str||'-'|| '+';
END LOOP;
tmp:= trunc(myvar.blocks/min_extents);
map_str:=map_str||'-'|| myvar.segment_name;
map_str:=map_str||'('||To_Char(tmp)||')';
prev_block_id:=cur_block_id;
prev_blocks:=cur_blocks;
END IF;
--Dbms_Output.put_line(map_str);
END LOOP;
CLOSE mycur;
INSERT INTO object_place_in_ts values(target_ts,file_id_v,alloc_str,map_str,sysdate);
COMMIT;
alloc_str:=NULL;
map_str:=NULL;
END LOOP;
END;
/
The script needs the following table to exist in the database.
Table : object_place_in_ts
Table description:
Name Null? Type
----------------------------------------
TABLESPACE_NAME VARCHAR2(30)
FILE_ID NUMBER
ALLOC_PATTERN CLOB
OBJ_PATTERN CLOB
CRE_TIME DATE
Alloc_pattern : string pattern in which allocated extents are shown with + and free extents with *.
Obj_pattern : string pattern in which each allocated extent carries the name of its object and free extents are shown with *.
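A minimal DDL sketch for that table, inferred from the INSERT statement inside the procedure (adjust column sizes to taste):
create table object_place_in_ts (
tablespace_name varchar2(30),
file_id number,
alloc_pattern clob,
obj_pattern clob,
cre_time date
);
-- Usage, e.g.: exec mapts('USERS');   -- or mapts('USERS', 16) for a coarser map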
Example :
tablespace_name allocated_pattern object_pattern
------------------------------------------------------------------------------
users ++*+++ -obj1(2)-*-objx(1)-objy(1)-objx(1)
In this example, the first two extents have been allocated, the third extent is free, and the fourth, fifth and sixth extents have been allocated (check allocated_pattern).
The first two extents were allocated by obj1, the fourth and sixth extents by objx, and the fifth extent by objy.
I found this result and analysis very handy for resolving tablespace fragmentation.
TIP 51# : Performance issue after upgrade to 10g.
I faced a performance issue for a client right after an upgrade from 9i to 10g.
We all know that the optimizer in 10g is a different animal which is supposed to work better and be more intelligent. This is almost always true, but not always.
Some queries may have performance problems after the upgrade from 9i to 10g, meaning they run slower in 10g.
To figure out whether the performance issue is caused by the new optimizer features in 10g, turn off the following options and re-run the query in 10g:
alter session set "_optimizer_cost_based_transformation" =off; (Disable subquery unnesting and view merging -- New 10g optimizer feature).
alter session set "_gby_hash_aggregation_enabled" = FALSE; (Disable Hash group by aggregation -- New 10g optimizer feature).
If disabling the above options resolves the situation, the performance issue is definitely caused by the new 10g optimizer features. Otherwise, the optimizer is not the culprit.
TIP #50: Opatch failed after uncompleted/interrupted patch
Opatch failed with the following message :
Problems with the lock file
Lock file exists, details are:
Interim Patch is holding the lock from xxxxxxx,
probably due to previous unsuccessful operation
ERROR: OPatch failed during pre-reqs check
Opatch creates a file named patch_locked in $ORACLE_HOME/.patch_storage while a patch is running. If for some reason the opatch run was interrupted or cancelled, this file may not be removed. As a result, further opatch runs find this file and fail because of it. The solution is to delete this file.
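In practice that means something like the following (path taken from the note above; make sure no other opatch session is actually running first):
$ rm $ORACLE_HOME/.patch_storage/patch_locked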
TIP # 49 : Recover database before resetlogs.
A client called me and asked me to recover a database to June 7th, which was a time at which all their databases had been opened with RESETLOGS.
When I checked, I found that the databases had been opened with resetlogs 4 times:
June 3rd
June 7th <==== Requested recovery time.
June 9th
June 10th
Recovery seemed to be straightforward with RMAN: just find the right incarnation, reset the database to this incarnation and recover the database.
That was my first thought, but it did not work.
Why? The reason is very simple. Since RMAN had no backup taken on June 7th right after the resetlogs, the database could not be recovered to that SCN. The database had to be recovered to an SCN on June 7th for which RMAN had a backup of all datafiles.
Therefore, the scenario was changed to this:
- Connect to recovery catalog.
- List incarnation; <== Find the incarnation for June 7th, assume 1234.
- RESET DATABASE TO INCARNATION 1234; <=== Incarnation from the previous step.
- List backup of database; <==== Find the max SCN on June 7th up to which all datafiles have been backed up (for example 28234442198).
- run {
allocate channel ch1 device type disk;
set UNTIL SCN 28234442198 ;
restore controlfile ;
sql 'alter database mount';
restore database;
recover database;
}
=========
Example :
RMAN> list incarnation;
List of Database Incarnations
DB Key Inc Key DB Name DB ID CUR Reset SCN Reset Time
------- ------- -------- ---------------- --- ---------- ----------
136421 136465 ALTP 2638891487 NO 190578 30-SEP-06
136421 136422 ALTP 2638891487 YES 28209020897 07-JUN-07
136421 141144 ALTP 2638891487 NO 28235483091 10-JUN-07
RESET DATABASE TO INCARNATION 136422;
List backup of database ;
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136888 Full 161M DISK 00:00:08 07-JUN-07
BP Key: 136903 Status: AVAILABLE Tag: TAG20070607T212319
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624662601_S1666_P1
List of Datafiles in backup set 136888
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
3 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\CWMLITE01.DBF
6 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\INDX01.DBF
13 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\CONVDATA.DBF
18 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\ININDX.DBF
28 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\SQINDX.DBF
29 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\SYDATA.DBF
30 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\SYINDX.DBF
38 Full 28234451112 07-JUN-07 D:\ORADATA\YKTP\WOINDX.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136889 Full 348M DISK 00:00:10 07-JUN-07
BP Key: 136904 Status: AVAILABLE Tag: TAG20070607T212319
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624662616_S1667_P1
List of Datafiles in backup set 136889
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
4 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\DRSYS01.DBF
5 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\EXAMPLE01.DBF
11 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\APPDDATA.DBF
14 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\CUSTDATA.DBF
19 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\INVCDATA.DBF
24 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\RFQDATA.DBF
35 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\T_DATA.DBF
37 Full 28234451193 07-JUN-07 D:\ORADATA\YKTP\WODATA.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136890 Full 200M DISK 00:00:09 07-JUN-07
BP Key: 136905 Status: AVAILABLE Tag: TAG20070607T212319
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624662631_S1668_P1
List of Datafiles in backup set 136890
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
7 Full 28234451214 07-JUN-07 D:\ORADATA\YKTP\ODM01.DBF
9 Full 28234451214 07-JUN-07 D:\ORADATA\YKTP\USERS01.DBF
10 Full 28234451214 07-JUN-07 D:\ORADATA\YKTP\XDB01.DBF
16 Full 28234451214 07-JUN-07 D:\ORADATA\YKTP\EQDATA.DBF
17 Full 28234451214 07-JUN-07 D:\ORADATA\YKTP\EQINDX.DBF
23 Full 28234451214 07-JUN-07 D:\ORADATA\YKTP\PRCHDATA.DBF
41 Full 28234451214 07-JUN-07 C:\ORACLE\ORADATA\YKTP\ARCHDATA.DBF
42 Full 28234451214 07-JUN-07 C:\ORACLE\ORADATA\YKTP\ARCHINDX.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136891 Full 365M DISK 00:00:11 07-JUN-07
BP Key: 136906 Status: AVAILABLE Tag: TAG20070607T212319
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624662646_S1669_P1
List of Datafiles in backup set 136891
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\SYSTEM01.DBF
8 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\TOOLS01.DBF
12 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\APPDINDX.DBF
15 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\CUSTINDX.DBF
20 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\INVCINDX.DBF
22 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\MSPROJ.DBF
26 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\BIGROLL1.DBF
32 Full 28234451320 07-JUN-07 D:\ORADATA\YKTP\YKTPTOOL.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136892 Full 330M DISK 00:00:11 07-JUN-07
BP Key: 136907 Status: AVAILABLE Tag: TAG20070607T212319
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624662662_S1670_P1
List of Datafiles in backup set 136892
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
21 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\IN_ITEM_XREF.DBF
25 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\RFQINDX.DBF
27 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\SQDATA.DBF
31 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\INDATA.ORA
33 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\TPDATA.DBF
34 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\TPINDX.DBF
36 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\YKTPUSER.DBF
40 Full 28234451389 07-JUN-07 D:\ORADATA\YKTP\PRCHINDX.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
136893 Full 1000M DISK 00:00:40 07-JUN-07
BP Key: 136908 Status: AVAILABLE Tag: TAG20070607T212319
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624662677_S1671_P1
List of Datafiles in backup set 136893
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
2 Full 28234451491 07-JUN-07 E:\ORADATA\YKTP\UNDOTBS01.DBF
39 Full 28234451491 07-JUN-07 C:\ORACLE\ORADATA\YKTP\COPYROL1.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
141796 Full 161M DISK 00:00:08 11-JUN-07
BP Key: 141802 Status: AVAILABLE Tag: TAG20070611T184529
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624998729_S1681_P1
List of Datafiles in backup set 141796
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
3 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\CWMLITE01.DBF
6 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\INDX01.DBF
13 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\CONVDATA.DBF
18 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\ININDX.DBF
28 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\SQINDX.DBF
29 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\SYDATA.DBF
30 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\SYINDX.DBF
38 Full 28239402207 11-JUN-07 D:\ORADATA\YKTP\WOINDX.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
141797 Full 349M DISK 00:00:09 11-JUN-07
BP Key: 141803 Status: AVAILABLE Tag: TAG20070611T184529
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624998745_S1682_P1
List of Datafiles in backup set 141797
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
4 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\DRSYS01.DBF
5 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\EXAMPLE01.DBF
11 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\APPDDATA.DBF
14 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\CUSTDATA.DBF
19 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\INVCDATA.DBF
24 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\RFQDATA.DBF
35 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\T_DATA.DBF
37 Full 28239402216 11-JUN-07 D:\ORADATA\YKTP\WODATA.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
141798 Full 201M DISK 00:00:08 11-JUN-07
BP Key: 141804 Status: AVAILABLE Tag: TAG20070611T184529
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624998760_S1683_P1
List of Datafiles in backup set 141798
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
7 Full 28239402297 11-JUN-07 D:\ORADATA\YKTP\ODM01.DBF
9 Full 28239402297 11-JUN-07 D:\ORADATA\YKTP\USERS01.DBF
10 Full 28239402297 11-JUN-07 D:\ORADATA\YKTP\XDB01.DBF
16 Full 28239402297 11-JUN-07 D:\ORADATA\YKTP\EQDATA.DBF
17 Full 28239402297 11-JUN-07 D:\ORADATA\YKTP\EQINDX.DBF
23 Full 28239402297 11-JUN-07 D:\ORADATA\YKTP\PRCHDATA.DBF
41 Full 28239402297 11-JUN-07 C:\ORACLE\ORADATA\YKTP\ARCHDATA.DBF
42 Full 28239402297 11-JUN-07 C:\ORACLE\ORADATA\YKTP\ARCHINDX.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
141799 Full 365M DISK 00:00:10 11-JUN-07
BP Key: 141805 Status: AVAILABLE Tag: TAG20070611T184529
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624998775_S1684_P1
List of Datafiles in backup set 141799
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\SYSTEM01.DBF
8 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\TOOLS01.DBF
12 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\APPDINDX.DBF
15 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\CUSTINDX.DBF
20 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\INVCINDX.DBF
22 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\MSPROJ.DBF
26 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\BIGROLL1.DBF
32 Full 28239402306 11-JUN-07 D:\ORADATA\YKTP\YKTPTOOL.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
141800 Full 330M DISK 00:00:10 11-JUN-07
BP Key: 141806 Status: AVAILABLE Tag: TAG20070611T184529
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624998790_S1685_P1
List of Datafiles in backup set 141800
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
21 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\IN_ITEM_XREF.DBF
25 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\RFQINDX.DBF
27 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\SQDATA.DBF
31 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\INDATA.ORA
33 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\TPDATA.DBF
34 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\TPINDX.DBF
36 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\YKTPUSER.DBF
40 Full 28239402384 11-JUN-07 D:\ORADATA\YKTP\PRCHINDX.DBF
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
141801 Full 1000M DISK 00:00:37 11-JUN-07
BP Key: 141807 Status: AVAILABLE Tag: TAG20070611T184529
Piece Name: E:\BACKUP\YKTP\RMAN\DB_BACKUP_YKTP_T624998806_S1686_P1
List of Datafiles in backup set 141801
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
2 Full 28239402393 11-JUN-07 E:\ORADATA\YKTP\UNDOTBS01.DBF
39 Full 28239402393 11-JUN-07 C:\ORACLE\ORADATA\YKTP\COPYROL1.DBF
After finding the proper SCN, run the following to recover the database to before the resetlogs:
run {
allocate channel ch1 device type disk;
set UNTIL SCN 28234451320 ;
restore controlfile ;
sql 'alter database mount';
restore database;
recover database;
}
TIP # 48 : Reading 10046 trace.
The following is a guideline to the meaning of the different sections in a 10046 trace file.
----------------------------------------------------------------------------
APPNAME mod='%s' mh=%lu act='%s' ah=%lu
----------------------------------------------------------------------------
APPNAME Application name setting. This only applies to Oracle 7.2
and above. This can be set by using the DBMS_APPLICATION_INFO
package. See Note 30366.1.
mod Module name.
mh Module hash value.
act Action.
ah Action hash value.
----------------------------------------------------------------------------
PARSING IN CURSOR #
END OF STMT
----------------------------------------------------------------------------
len Length of SQL statement.
dep Recursive depth of the cursor.
uid Schema user id of parsing user.
oct Oracle command type.
lid Privilege user id.
tim Timestamp.
Pre-Oracle9i, the times recorded by Oracle only have a resolution
of 1/100th of a second (10mS). As of Oracle9i some times are
available to microsecond accuracy (1/1,000,000th of a second).
The timestamp can be used to determine times between points
in the trace file.
The value is the value in V$TIMER when the line was written.
If there are TIMESTAMPS in the file you can use the difference
between 'tim' values to determine an absolute time.
hv Hash id.
ad SQLTEXT address (matches the ADDRESS column in V$SQLAREA).
----------------------------------------------------------------------------
PARSE ERROR #%d:len=%ld dep=%d uid=%ld oct=%d lid=%ld tim=%lu err=%d
----------------------------------------------------------------------------
PARSE ERROR In Oracle 7.2+ parse errors are reported to the trace file.
len Length of SQL statement.
dep Recursive depth of the statement
uid User id.
oct Oracle command type (if known).
lid Privilege user id.
tim Timestamp.
err Oracle error code (e.g. ORA-XXXXX) reported.
The failing statement text follows; if very long, it is truncated,
as indicated by '...' at the end.
----------------------------------------------------------------------------
PARSE #
EXEC #
FETCH #
UNMAP #
----------------------------------------------------------------------------
PARSE Parse a statement.
EXEC Execute a pre-parsed statement.
FETCH Fetch rows from a cursor.
UNMAP If the cursor uses a temporary table, when the cursor is
closed you see an UNMAP when we free up the temporary table
locks.(Ie: free the lock, delete the state object, free the
temp segment)
In tkprof, UNMAP stats get added to the EXECUTE statistics.
SORT UNMAP
As above, but for OS file sorts or TEMP table segments.
c CPU time (100th's of a second in Oracle 7, 8 and 9).
e Elapsed time (100th's of a second in Oracle 7 and 8;
microseconds in Oracle 9 onwards).
p Number of physical reads.
cr Number of buffers retrieved for CR reads.
cu Number of buffers retrieved in current mode.
mis Cursor missed in the cache.
r Number of rows processed.
dep Recursive call depth (0 = user SQL, >0 = recursive).
og Optimizer goal: 1=All_Rows, 2=First_Rows, 3=Rule, 4=Choose
tim Timestamp (large number in 100ths of a second). Use this to
determine the time between any 2 operations.
----------------------------------------------------------------------------
ERROR #%d:err=%d tim=%lu
----------------------------------------------------------------------------
ERROR SQL error shown after an execution or fetch error.
err Oracle error code (e.g. ORA-XXXXX) at the top of the stack.
tim Timestamp.
----------------------------------------------------------------------------
STAT #
----------------------------------------------------------------------------
STAT Lines report explain plan statistics for the numbered
id Line of the explain plan which the row count applies to (starts
at line 1). This is effectively the row source row count
for all row sources in the execution tree.
cnt Number of rows for this row source.
As of 7.3.3 the items in '[...]' are also reported:
pid Parent id of this row source.
pos Position in explain plan.
obj Object id of row source (if this is a base object).
op='...' The row source access operation.
These let you know the 'run time' explain plan.
----------------------------------------------------------------------------
XCTEND rlbk=%d rd_only=%d
----------------------------------------------------------------------------
XCTEND A transaction end marker.
rlbk 1 if a rollback was performed, 0 if no rollback (commit).
rd_only 1 if transaction was read only, 0 if changes occurred.
----------------------------------------------------------------------------
======================================================================
The items below are only output if WAITS or BINDS are being traced.
These can be enabled with the DBMS_SUPPORT package.
======================================================================
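For reference, a hedged example of turning WAITS and BINDS tracing on with DBMS_SUPPORT (the package is not installed by default; it typically comes from dbmssupp.sql under $ORACLE_HOME/rdbms/admin):
SQL> exec dbms_support.start_trace(waits => TRUE, binds => TRUE);
-- run the workload to be traced, then:
SQL> exec dbms_support.stop_trace;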
----------------------------------------------------------------------------
BINDS #%d:
bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=03 oacfl2=0 size=24 offset=0
bfp=02fedb44 bln=22 avl=00 flg=05
value=10
----------------------------------------------------------------------------
BIND Variables bound to a cursor.
bind N The bind position being bound.
dty Data type code (see the dty list in TIP # 64 above).
mxl Maximum length of the bind variable (private max len in paren).
mal Array length.
scl Scale.
pre Precision.
oacflg Special flag indicating bind options
oacflg2 Continuation of oacflg
size Amount of memory to be allocated for this chunk
offset Offset into this chunk for this bind buffer
bfp Bind address.
bln Bind buffer length.
avl Actual value length (array length too).
flg Special flag indicating bind status
value The actual value of the bind variable.
Numbers show the numeric value, strings show the string etc...
It is also possible to see "bind 6: (No oacdef for this bind)", if no
separate bind buffer exists.
----------------------------------------------------------------------------
WAIT #
----------------------------------------------------------------------------
WAIT An event that we waited for.
nam What was being waited for.
The wait events here are the same as those seen in V$SESSION_WAIT,
and the meaning of the P1, P2 and P3 values below can be looked up
in V$EVENT_NAME for the given event.
ela Elapsed time for the operation.
p1 P1 for the given wait event.
p2 P2 for the given wait event.
p3 P3 for the given wait event.
Example (Full Table Scan):
WAIT #1: nam="db file scattered read" ela= 5 p1=4 p2=1435 p3=25
WAITing under CURSOR no 1
for "db file scattered read"
We waited 0.05 seconds
For a read of: File 4, start block 1435, for 25 Oracle blocks
Example (Index Scan):
WAIT #1: nam="db file sequential read" ela= 4 p1=4 p2=1224 p3=1
WAITing under CURSOR no 1
for "db file sequential read"
We waited 0.04 seconds for a single block read (p3=1)
from file 4, block 1224
TIP # 47 :Consideration in granting SELECT to public
After digging in, I found out that these privileges came from select privileges which had been granted to PUBLIC. The lesson is that any privilege granted to PUBLIC is granted to all users. As a result, in order to prevent security holes, granting privileges to PUBLIC should be restricted.
Above all, a known bug, or better said a big security hole, may exist in databases where the select privilege has been granted to PUBLIC. In a database without the latest CPU patch, users can run DML commands on tables for which only the select privilege has been granted to PUBLIC (a security hole!).
The following shows a sample:
create user test identified by test;
create user test1 identified by test1;
grant resource,connect to test;
grant connect to test1;
--------- Create a table in the test user and grant select to public
connect test/test
create table test_sec (id number primary key,id2 number);
grant select on test_sec to public;
--------- Connect to second user .
connect test1/test1
desc test.test_sec
Name Null? Type
---------------------------
ID NOT NULL NUMBER
ID2 NUMBER
select * from test.test_sec; <===== user can select because select was granted to PUBLIC
insert into test.test_sec values(1,1);
ORA-01031: insufficient privileges <======= User can not insert because insert was not granted
=========== Bug ==============
test1 can insert into test.test_sec by using the following syntax:
insert into (select a.id,a.id2 from (select * from test.test_sec)a inner join (select * from test.test_sec )b on (a.id=b.id) ) values (1,2);
1 row inserted. <========== User can insert although it has no privilege to do so
select * from test.test_sec;
ID ID2
------------------
1 2
The bottom line: avoid granting privileges to PUBLIC, or monitor the privileges granted to PUBLIC, to prevent unexpected permissions; a simple monitoring query follows.
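A straightforward way to do that monitoring is to list everything PUBLIC holds, using standard dictionary views (nothing environment-specific here):
select grantee, owner, table_name, privilege from dba_tab_privs where grantee = 'PUBLIC' order by owner, table_name;
select * from dba_sys_privs where grantee = 'PUBLIC';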
TIP #46 : Using 'Plus archivelog' in backup
I regularly got a "flash recovery area full" problem with a retention policy of 2 weeks. Surprisingly, the database is only 1GB.
The backup script was as simple as 'backup database ...... plus archivelog';
Finally, I found that the problem was because of plus archivelog.
According to the RMAN documentation, RMAN performs these steps when "plus archivelog" is in the backup script:
- Runs an ALTER SYSTEM ARCHIVE LOG CURRENT statement.
- Runs the BACKUP ARCHIVELOG ALL command. Note that if backup optimization is enabled, then RMAN only backs up logs that have not yet been backed up.
- Backs up the files specified in the BACKUP command.
- Runs an ALTER SYSTEM ARCHIVE LOG CURRENT statement.
- Backs up any remaining archived redo logs.
If backup optimization is off (the RMAN default), all archived logs are backed up every time, which means the backup size grows on a daily basis, since archived logs in the flash recovery area are not removed while free space still exists.
In a nutshell, when "plus archivelog" is used in a backup script, make sure that backup optimization is on in the RMAN configuration; otherwise every backup includes the whole set of archived logs.
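Checking and enabling it is a one-liner each (standard RMAN configuration commands):
RMAN> SHOW BACKUP OPTIMIZATION;
RMAN> CONFIGURE BACKUP OPTIMIZATION ON;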
TIP #45 : Performance tuning PL/SQL with profiler.
This post shows how to use it.
To use dbms_profiler, it first has to be installed, along with its tables:
- @?/rdbms/admin/profload.sql (installs dbms_profiler)
- @?/rdbms/admin/proftab.sql (creates the profiler tables)
How to trace :
- exec dbms_profiler.start_profiler('Sample1');
- Run the PL/SQL code (it is better for it to be a procedure/function/package rather than an anonymous block).
- exec dbms_profiler.stop_profiler;
- exec DBMS_PROFILER.FLUSH_DATA;
How to see the result :
- Find which runid belongs to the current trace:
SELECT runid,
run_date,
run_comment,
run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
- Total elapsed time of each line of PL/SQL code:
SELECT u.runid,
u.unit_number,
u.unit_type,
u.unit_owner,
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
d.min_time,
d.max_time
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid AND u.unit_number = d.unit_number
WHERE u.runid = &1
ORDER BY u.unit_number, d.line#;
FYI: putting calls to dbms_utility.get_time in the PL/SQL code would also show the exact execution time of each piece of code; see the sketch below.
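A minimal sketch of that get_time technique (the "code being timed" part is obviously a placeholder; get_time returns hundredths of a second, and set serveroutput on is needed in SQL*Plus to see the output):
declare
t0 pls_integer;
begin
t0 := dbms_utility.get_time;
-- ... code being timed ...
dbms_output.put_line('Elapsed: ' || (dbms_utility.get_time - t0)/100 || ' seconds');
end;
/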
TIP # 44 : Performance tips for PL/SQL code.
- In compound conditions, put the least expensive condition first (evaluation stops as soon as the outcome is known).
- Minimize implicit conversions; try not to mix datatypes (e.g. NUMBER and PLS_INTEGER).
- Use PLS_INTEGER when you can (best performance).
- Use BINARY_FLOAT or BINARY_DOUBLE for floating-point arithmetic.
- Preferably declare large VARCHAR2 variables with a size above 4000, because Oracle then allocates the memory at runtime only as needed (saving memory); for VARCHAR2 variables smaller than 4000, the full declared size is allocated up front.
- Put related subprograms into a package; calling one function/procedure of the package loads the whole package into memory, where it can be referenced later.
- Pin frequently used packages to prevent them from aging out (dbms_shared_pool.keep).
- OUT parameters add some performance overhead, because Oracle has to make sure that in the case of an unhandled exception the OUT parameter keeps its original value back in the calling program. If that is not important, use OUT NOCOPY to reduce the overhead (this matters for subprograms with big OUT collections or LOB OUT parameters).
- If you are running SQL statements inside a PL/SQL loop, use the FORALL statement to replace loops of INSERT, UPDATE, and DELETE statements.
- If you are looping through the result set of a query, use the BULK COLLECT clause of the SELECT INTO statement to bring the entire result set into memory in a single operation.
- Prevent unnecessary function calls:
for item in (select distinct(sqrt(empno)) from emp) ===> change to ===>
for item in (select sqrt(empno) from (select distinct empno from emp))
Example :
=== Poor performance
begin
For myvar in (select empno from emp) loop
if (myvar.empno>1000) then
delete from emp2 where empno = myvar.empno;
end if;
end loop;
end;
=== Good performance
declare
type mytype is table of number index by pls_integer;
myvar mytype;
i pls_integer:=1;
begin
select empno bulk collect into myvar from scott.emp where empno>1000;
forall i in myvar.first..myvar.last delete from scott.emp2 where empno=myvar(i);
end;
==== FORALL with SAVE EXCEPTIONS: SQL%BULK_ROWCOUNT(i) and SQL%BULK_EXCEPTIONS.COUNT <==== only work with FORALL
declare
type mytype is table of number(10) index by pls_integer;
myvar mytype;
i pls_integer:=1;
errors number;
begin
select empno bulk collect into myvar from scott.emp where empno>1000;
forall i in myvar.first..myvar.last save exceptions delete from scott.emp2 where empno=myvar(i);
exception WHEN others THEN -- Now we figure out what failed and why.
errors := SQL%BULK_EXCEPTIONS.COUNT;
DBMS_OUTPUT.PUT_LINE('Number of statements that failed: ' || errors);
FOR i IN 1..errors LOOP
DBMS_OUTPUT.PUT_LINE('Error #' || i || ' occurred during iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
DBMS_OUTPUT.PUT_LINE('Error message is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
end;
TIP #43: Effective DOP.
However, the big question is whether the query really runs with the requested DOP (degree of parallelism) or whether Oracle downgrades it to a lower DOP because of a shortage of resources.
Doug Burns has a full analysis on his blog (http://oracledoug.com/serendipity/index.php?/archives/1231-How-can-I-tell-the-actual-DOP-used-for-my-Parallel-Query.html).
From reading Doug's blog and various articles, I think the best way to determine whether or not a session is using the requested DOP is to run the following query:
select n.name, s.value
from v$sesstat s, v$statname n
where
s.statistic# = n.statistic#
and
n.name like 'Parallel%'
and sid=&sid_val;
If 'Parallel operations not downgraded' <> 0, the session is running with the requested DOP. However, if 'Parallel operations downgraded to serial' <> 0, the query is not running in parallel at all.
The statistics 'Parallel operations downgraded 75 to 99 pct', 'Parallel operations downgraded 50 to 75 pct', 'Parallel operations downgraded 25 to 50 pct' and 'Parallel operations downgraded 1 to 25 pct' show whether the DOP was downgraded to a lower value, and by roughly what percentage.
TIP #42: Package compilation takes long time
When I checked the compiling session, it was waiting on 'library cache lock' and 'library cache pin'.
Reason: this package was a major security package used by most sessions. The compilation hung because other sessions were executing procedures/functions of this package.
I found Metalink note 122793.1 very useful; it basically recommends two options.
Option 1. Find the blocking sessions by enabling a systemstate trace.
- Run the hanging compilation session and find out its process id (PID).
- Log in with a new session and enable tracing with:
ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME SYSTEMSTATE LEVEL 10';
- Check user_dump_dest for the trace and find the process which matches the PID of the hanging session (assume the PID of the hanging session is 8). Find its handle address, then find the other PID holding the same handle. In the following example, PID 8 is the hanging session and PID 9 is the blocking session. Having PID=9, more information about the blocking session can be found.
PROCESS 8:
----------------------------------------
.
.
.
.
waiting for 'library cache lock' blocking sess=0x0 seq=253 wait_time=0
handle address=5023ef9c, lock address=5019cad4, 10*mode+namespace=15
PROCESS 9:
----------------------------------------
.
.
.
LIBRARY OBJECT PIN: pin=5019d5e4 handle=5023ef9c mode=X lock=0
user=5005fad4 session=5005fad4 count=1 mask=0511 savepoint=118218 flags=[00]
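With PID=9 in hand, a quick sketch (not part of the original trace output) to map that Oracle process number back to a database session:
SELECT s.sid, s.serial#, s.username, s.program
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND p.pid = 9;   -- the blocking process number from the systemstate dump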
Option 2. Find the blocking sessions by running a query.
Run the following query. X$KGLLK lists all library cache locks; KGLLKREQ > 0 means a lock was requested by a session but has not been granted yet.
This query lists all sessions blocking the hanging session.
SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0)
);
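The 'saddr_from_v$session' placeholder is the SADDR of the blocked (compiling) session; a quick sketch of how to look it up (substitute the SID of the hanging session):
SELECT saddr FROM v$session WHERE sid = &hanging_sid;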
TIP #41: Estimate backup size (2)
I ran the following steps in 10.2.0.2 on AIX5 64-bit. RMAN behavior in 10.2.0.2 is closer to reality, although the backup size is still not an exact match for the database's allocated space.
To sum up: before 10.2.0.2 the backup size cannot be expected to match dba_segments (RMAN only skips blocks that have never been used), while in 10.2.0.2 the size of a full backup roughly tracks the space actually allocated in the database (dba_segments).
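As a rough pre-backup estimate (a sketch, not part of the original test), comparing allocated datafile size with allocated segment size per tablespace gives an upper bound and, on 10.2.0.2 and later, a rough lower bound for the backup size:
SELECT f.tablespace_name,
       round(f.file_mb)         AS datafile_mb,  -- upper bound for the backup size
       round(nvl(s.seg_mb,0),1) AS segment_mb    -- rough lower bound in 10.2.0.2 and later
FROM  (SELECT tablespace_name, sum(bytes)/1024/1024 file_mb
       FROM dba_data_files GROUP BY tablespace_name) f,
      (SELECT tablespace_name, sum(bytes)/1024/1024 seg_mb
       FROM dba_segments GROUP BY tablespace_name) s
WHERE f.tablespace_name = s.tablespace_name(+);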
Sample test in 10gR2. (Same scenario as 10.1).
=== Create new tablespace
SQL> create tablespace rmantst datafile '/o022/test/rmantst.dbf' size 200M;
Tablespace created.
-- Taking RMAN backup from empty tablespace
run{
backup tablespace rmantst format '/o022/test/backup_after_ts_creation_%s.bak';
}
Starting backup at 15-APR-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=533 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00028 name=/o022/test/rmantst.dbf
channel ORA_DISK_1: starting piece 1 at 15-APR-07
channel ORA_DISK_1: finished piece 1 at 15-APR-07
piece handle=/o022/test/backup_after_ts_creation_4.bak tag=TAG20070415T112434 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 15-APR-07
Starting Control File and SPFILE Autobackup at 15-APR-07
piece handle=/o022/backup/control_back_c-1721358348-20070415-01 comment=NONE
Finished Control File and SPFILE Autobackup at 15-APR-07
-- Check backup size
oracle(/o022/test):ls -l
total 410064
-rw-r----- 1 oracle dba 209723392 Apr 15 11:24 rmantst.dbf
-rw-r----- 1 oracle dba 106496 Apr 15 11:24 backup_after_ts_creation_4.bak (Backup of empty tablespace is only 106KB).
---- Create some objects in rmantst tablespace
create table rmantest2 tablespace rmantst as select * from dba_objects;
create table rmantest3 tablespace rmantst as select * from dba_objects;
create table rmantest4 tablespace rmantst as select * from dba_objects;
create table rmantest5 tablespace rmantst as select * from dba_objects union all select * from dba_objects union all select * from dba_objects union all select * from dba_objects;
create table rmantest6 tablespace rmantst as select * from dba_objects union all select * from dba_objects union all select * from dba_objects union all select * from dba_objects;
create table rmantest7 tablespace rmantst as select * from dba_objects union all select * from dba_objects union all select * from dba_objects union all select * from dba_objects;
SQL> select sum(bytes)/1024/1024 from dba_data_files where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
200
SQL> select sum(bytes)/1024/1024 from dba_segments where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
112 <== 112MB of 200MB was allocated.
--- Take tablespace backup
RMAN> run
2> {
3> backup tablespace rmantst format '/o022/test/backup_after_tbl_creation_%s.bak';
4> }
Starting backup at 15-APR-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=488 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00028 name=/o022/test/rmantst.dbf
channel ORA_DISK_1: starting piece 1 at 15-APR-07
channel ORA_DISK_1: finished piece 1 at 15-APR-07
piece handle=/o022/test/backup_after_tbl_creation_6.bak tag=TAG20070415T113143 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 15-APR-07
Starting Control File and SPFILE Autobackup at 15-APR-07
piece handle=/o022/backup/control_back_c-1721358348-20070415-02 comment=NONE
Finished Control File and SPFILE Autobackup at 15-APR-07
RMAN> exit
Recovery Manager complete.
oracle(/o022/test):ls -l
total 634320
-rw-r----- 1 oracle dba 114786304 Apr 15 11:31 backup_after_tbl_creation_6.bak (Backup size is ~ 114M)
-rw-r----- 1 oracle dba 106496 Apr 15 11:24 backup_after_ts_creation_4.bak
-rw-r----- 1 oracle dba 209723392 Apr 15 11:31 rmantst.dbf
-- Remove data from all objects in the tablespace.
SQL> truncate table rmantest2;
Table truncated.
SQL> truncate table rmantest1;
Table truncated.
SQL> truncate table rmantest3;
Table truncated.
SQL> truncate table rmantest4;
Table truncated.
SQL> truncate table rmantest5 ;
Table truncated.
SQL> truncate table rmantest6;
Table truncated.
SQL> truncate table rmantest7;
Table truncated.
SQL> select sum(bytes)/1024/1024 from dba_data_files where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
200
SQL> select sum(bytes)/1024/1024 from dba_segments where tablespace_name='RMANTST';
SUM(BYTES)/1024/1024
--------------------
.4375 <== only 400K was allocated.
-- Take RMAN backup
run
{
backup tablespace rmantst format '/o022/test/backup_after_tbl_truncate_%s.bak';
}
oracle(/o022/test):ls -ltr
total 641792
-rw-r----- 1 oracle dba 106496 Apr 15 11:24 backup_after_ts_creation_4.bak
-rw-r----- 1 oracle dba 114786304 Apr 15 11:31 backup_after_tbl_creation_6.bak
-rw-r----- 1 oracle dba 209723392 Apr 15 11:35 rmantst.dbf
-rw-r----- 1 oracle dba 3825664 Apr 15 11:35 backup_after_tbl_truncate_8.bak (Backup after truncate is only ~3.6MB; not exactly the same as dba_segments, but far better than 10.2.0.1.)
TIP #40: Different options for creating a table from a select
My goal was to create the table as quickly as possible.
Here are the different options for a create table as select statement:
- Simple create as select (CTS) with no option
- CTS with nologging option
- CTS with unrecoverable option
- CTS with parallel option
- CTS with parallel and nologging option
The nologging and unrecoverable options generate as little redo as possible. (The database must not be in force logging mode; see the check below.)
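A quick way to confirm the database is not in force logging mode (a simple check; it should return NO for nologging/unrecoverable to take effect):
SQL> select force_logging from v$database;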
The following shows practical examples of the above options.
========== Option 1 : Simple CTS =======================
SQL> create table tbl1 as select * from dba_segments;
Table created.
Elapsed: 00:00:43.04
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 405476 <== 405K redo was generated
========== Option 2 :Nologging option =======================
SQL> create table tbl2 nologging as select * from dba_segments;
Table created.
Elapsed: 00:00:42.06 <== No major improvement in execution time
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 436128 <== Only ~30K additional redo was generated (v$mystat values are cumulative for the session). Major improvement in redo generation.
========== Option 3 :Unrecoverable option =======================
SQL> create table tbl3 unrecoverable as select * from dba_segments;
Table created.
Elapsed: 00:00:45.00 <== No major improvement in execution time
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 466688 <== Only ~30K additional redo was generated. Major improvement in redo generation.
========== Option 4 :Parallel option =======================
SQL> create table tbl4 parallel (degree 4) as select * from dba_segments;
Table created.
Elapsed: 00:00:14.09 <== Major improvement in SQL execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 894304 <== ~420K additional redo, i.e. normal redo generation
SQL> create table tbl5 parallel as select * from dba_segments;
Table created.
Elapsed: 00:00:16.03 <== Major improvement in SQL execution
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1314768 <== ~420K additional redo, i.e. normal redo generation
========== Option 5 :Parallel and nologging option =======================
SQL> create table tbl6 parallel unrecoverable as select * from dba_segments;
Table created.
Elapsed: 00:00:26.00 <== Still an improvement in total execution time
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1366836 <== ~52K additional redo, well below the normal ~400K
========== Other option =======================
Direct Insert:
SQL> create table tbl7 unrecoverable as select /*+ append */ * from dba_segments;
Table created.
Elapsed: 00:00:41.08 <== No major improvement in execution time
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1416912 <== ~50K additional redo, less than normal
SQL> create global temporary table tbl8 as select /*+ append */ * from dba_segments;
Table created.
Elapsed: 00:00:42.01 <== No improvement in total execution time
SQL> select sid,name,value from v$statname a,v$mystat b where a.STATISTIC#=b.STATISTIC# and a.name like '%redo size%';
SID NAME VALUE
---------- -------------------- ----------
20 redo size 1440580 <== ~23K additional redo, the least of all options
=====
The combination of parallelism and nologging is the best choice with respect to performance. Make sure the database is not in "force logging" mode. Tuning the select statement before running the CTS command is also highly recommended.
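Putting the fastest options together, a minimal sketch of the combined form (the table name and degree are just placeholders; the same v$mystat query used above can confirm that redo generation stays low):
SQL> create table tbl_fast parallel 4 nologging as select /*+ parallel(s 4) */ * from dba_segments s;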
TIP #39: SSO in Oracle Application Server
SSO components:
- OC4J_Security
- HTTP server
Options for bouncing SSO (see the opmnctl sketch after this list):
- Bounce SSO component only
- Bounce SSO and OID
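A sketch of how this can be done with opmnctl (the component names below are the common 10g defaults from opmn.xml and may differ in your install):
$ $ORACLE_HOME/opmn/bin/opmnctl restartproc process-type=OC4J_SECURITY   # SSO OC4J instance
$ $ORACLE_HOME/opmn/bin/opmnctl restartproc ias-component=HTTP_Server    # SSO HTTP server
$ $ORACLE_HOME/opmn/bin/opmnctl restartproc ias-component=OID            # only if OID also needs a bounce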
Flow of SSO
- The user tries to access the application for the first time.
- There is no login cookie, so the request is redirected to the SSO server.
- The SSO server returns the username/password login page to the user.
- SSO verifies the username/password against OID.
- If the password is OK, SSO returns a token to the client with the list of applications the user has access to. This token is stored on the client as a cookie.