An index does not store NULL values; consequently, if a SQL statement contains an IS NULL condition, the related index is ignored by default.
create index scott.empidx_sal on scott.emp(sal);
select ename from scott.emp where sal is null;
---> Execution plan is : Full table scan.
Workaround :
- Create a function-based index on nvl(sal,-1) and change the query to select ename from scott.emp where nvl(sal,-1)=-1; (see the sketch at the end of this tip).
- I prefer this solution: create a composite index. In a composite index, as long as not all of the indexed columns are NULL, the entry (including the NULL) is stored in the index.
create index scott.empidx_salename on scott.emp(ename,sal);
select ename from scott.emp where sal is null;
---> Execution plan is : Index scan
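A minimal sketch of the first workaround (the function-based index); the index name empidx_sal_nvl is my own choice, the rest follows the example above:
create index scott.empidx_sal_nvl on scott.emp(nvl(sal,-1));
select ename from scott.emp where nvl(sal,-1)=-1;
With this rewrite the optimizer can use the function-based index instead of doing a full table scan.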
TIP #7 : Listener crash on a Windows box.
I faced a problem yesterday. The Oracle listener crashed each time a connection was made. The OS was Windows XP SP2.
The event log showed the following error in the System and Application logs.
Faulting application TNSLSNR.EXE, version 0.0.0.0, faulting module orantcp9.dll, version 0.0.0.0, fault address 0x00005732
The following items are proposed by Metalink to resolve the issue :
- Setting USE_SHARED_SOCKET=TRUE in HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0.
- Removing all 3rd-party Internet download software, such as download accelerators.
- Changing TNSNAMES.ORA so that it mentions ORACLE_HOME explicitly.
None of the above solutions worked for me. In addition, I could not find anything in the listener trace file or listener log that might lead me to the cause of the problem.
By the way, I verified that no other process was listening on the listener's default port, and changing the listener port did not change anything.
Finally I found the following magic command, which fixed the issue after being run and the box being restarted.
netsh winsock reset catalog
Simple explanation :
Most Internet connectivity problems are caused by corruption of the Winsock settings. Windows sockets settings may get corrupted by the installation of networking software or by a malware infestation. In this situation the user can connect to the Internet but may randomly get "Page cannot be displayed". The command above resets the Winsock catalog to its default configuration. This can be useful if a malformed LSP is installed that results in loss of network connectivity.
TIP #6: Stop jobs in 10g
Before 10g, if you wanted to stop all jobs, you could simply set JOB_QUEUE_PROCESSES to zero.
In 10g, Oracle introduced scheduler jobs with many advanced features.
In this post I do not want to go into the different features of scheduler jobs.
In 10g, it is possible to schedule a job either in dba_jobs or in dba_scheduler_jobs.
Setting JOB_QUEUE_PROCESSES to zero stops only the jobs in dba_jobs, which means that the scheduler jobs are still running.
In order to disable the jobs in dba_scheduler_jobs, the following should be run for each job in the scheduler:
BEGIN
sys.dbms_scheduler.disable( '"schema?"."job name?"' );
END;
/
FYI : To find all jobs using the 10g scheduler, run: select owner, job_name, state from dba_scheduler_jobs;
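To avoid running the block by hand once per job, here is a minimal sketch that loops over the scheduler jobs and disables each one; it assumes the executing user has privileges on all of them, and jobs that are currently running may additionally need force => TRUE:
BEGIN
  FOR j IN (SELECT owner, job_name
              FROM dba_scheduler_jobs
             WHERE state <> 'DISABLED')
  LOOP
    -- disable the job so it no longer runs on its schedule
    sys.dbms_scheduler.disable('"' || j.owner || '"."' || j.job_name || '"');
  END LOOP;
END;
/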
TIP #5 : Oracle account locked after 10g migration
I migrated 9i DBs to 10g.
After migration, clients complained that some of their accounts were locked.
In 10g, FAILED_LOGIN_ATTEMPTS in default profile was changed from UNLIMITED to 10.
This is the cause of the problem.
I suggest creating a new profile before the migration and assigning all users that have the default profile to the new profile.
The new profile can be something like this :
CREATE PROFILE NEW_DEFAULT LIMIT FAILED_LOGIN_ATTEMPTS UNLIMITED;
select 'ALTER USER ' || username || ' PROFILE NEW_DEFAULT;' from dba_users where profile='DEFAULT';
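For accounts that have already been locked by the new limit, a similar generated-SQL sketch can unlock them; review the output before running it, since the filter also catches accounts that were locked on purpose:
select 'ALTER USER ' || username || ' ACCOUNT UNLOCK;' from dba_users where account_status like '%LOCKED%';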
TIP #4 : RMAN Recovery window
I'd like to clarify a common misunderstanding of the RMAN recovery window.
The incorrect assumption is :
Setting the RMAN recovery window to 14 days ==> any backup older than 14 days becomes obsolete. Incorrect.
According to the Oracle documentation, a recovery window is a period of time that begins with the current time and extends backward in time to the point of recoverability. In other words, the recovery window determines the earliest point in time to which recovery must be possible. Consequently, RMAN may need backups older than the recovery window of n days in order to recover the database to a point n days ago.
Let me give you an example :
Assumptions :
- Recovery window 7 days.
- Scheduled DB backups : Nov 1st, Nov 14th, Nov 21st, Nov 28th.
- current date : Nov 16th.
Backups : Nov 1st -- Nov 14th -- Nov 21st -- Nov 28th
Recovery window : Nov 9th -- Nov 16th (current date)
In this case, to be recoverable at any point between Nov 9th and Nov 16th (the recovery window), RMAN still requires the Nov 1st backup for recovery to any time between Nov 9th and Nov 14th; after that, RMAN uses the Nov 14th backup.
This picture makes it clearer : having a recovery window of 7 days does not make the Nov 1st backup obsolete.
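For reference, the retention policy in this example is set with the standard RMAN commands below, and REPORT OBSOLETE then lists exactly which backups RMAN considers obsolete under that window; this is a quick way to verify the behaviour described above (run from the RMAN prompt):
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
REPORT OBSOLETE;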
TIP #3 : _PGA_MAX_SIZE hidden parameter.
_pga_max_size is a hidden parameter that can play a key role in proper PGA usage.
For serial operations, the PGA for each work area is limited to a maximum of 100MB when pga_aggregate_target >= 2GB.
_pga_max_size controls the maximum PGA that can be used by each work area.
The Oracle default can be changed by assigning a higher value to this hidden parameter. In addition, doing so can reduce excessive temporary tablespace usage.
I highly recommend setting this parameter for big databases.
I did not encounter any problems after setting this parameter in 9i and 10g.
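A sketch of how the parameter could be inspected and changed; the 400MB value is only an illustrative assumption, and underscore parameters should be changed with care (ideally with Oracle Support's blessing):
-- Check the current value (run as SYS; the x$ views are only visible to SYS).
select i.ksppinm, v.ksppstvl
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm = '_pga_max_size';
-- Raise it to 400MB (419430400 bytes); depending on the version this may
-- require scope=spfile and an instance restart instead of scope=both.
alter system set "_pga_max_size" = 419430400 scope = both;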
TIP #2 : TEMP tablespace cannot extend.
TEMP tablespace growth is a well-known issue for DBAs.
I know that each of us has experienced it many times.
In this post, I'd like to outline the issue and explain the options to fix it.
Basically, the error is reported in the alert log with the following message:
ORA-01652: unable to extend temp segment by %s in tablespace %s
A "temp segment" is not necessarily a SORT segment in a temporary tablespace. Actually it can be used in following conditions:
- In SORT statement
- In creating Index
- In enabling constraint
- In creating primary key/unique key constraint.
.
.
.
Conseuqently, it is not true to say TEMP segment is only used for SORT operations.
That said, in practice the most common use of a TEMP segment is sorting.
I suggest following the guidelines below to resolve the issue :
- Find out whether the PGA (pga_aggregate_target in 9i and later, or sort_area_size before 9i) is properly sized.
Having a bigger PGA will decrease TEMP usage.
- Make sure that the TEMP datafile is autoextensible and that maxbytes/maxblocks are not set too low (see the sketch after this list).
select * from dba_temp_files;
- Find the queries that allocate big chunks of the temp tablespace and try to tune them.
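A short sketch for the first two checks; the tempfile path and the sizes are assumptions, adjust them to your environment:
-- Current PGA target (9i and later).
select name, value from v$parameter where name = 'pga_aggregate_target';
-- Make an existing tempfile autoextensible with an upper bound.
alter database tempfile '/u01/oradata/PROD/temp01.dbf'
  autoextend on next 100M maxsize 8192M;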
The following is a shell script that can be scheduled via crontab.
Its output captures the SQL statements that allocate TEMP tablespace.
In the database :
CREATE TABLE "AVAIL"."MONITOR_SORTSEG"
( "TABLESPACE_NAME" VARCHAR2(31),
"USED_EXTENTS" NUMBER,
"FREE_EXTENTS" NUMBER,
"ISSUE_TIME" DATE);
On the OS (monitor_temp.sh) :
#!/bin/ksh
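# Records TEMP sort-segment pressure (v$sort_segment) and the SQL statements
# currently allocating TEMP space (v$sort_usage joined to v$sqlarea) into the
# monitoring tables created above.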
SCRIPT=1; export SCRIPT
. ~orfnprd/.profile
$ORACLE_HOME/bin/sqlplus -s "/ as sysdba" <<EOF
set serveroutput on
begin
insert into monitor_sortseg select tablespace_name,USED_EXTENTS,FREE_EXTENTS,sysdate
from v\$sort_segment where upper(tablespace_name)='TEMP' and free_extents<=500;
if(SQL%ROWCOUNT<>0) then
dbms_output.put_line('Start -------- Start Date:'||sysdate);
dbms_output.put_line('TEMP free extents are 500 or fewer .... Finding SQL statement');
insert into monitor_sortuse select USER,SESSION_NUM,sortu.TABLESPACE,
sysdate,sql_text,sorts from v\$sort_usage sortu,v\$sqlarea sqla where upper
(tablespace)='TEMP' and sortu.SQLHASH=sqla.HASH_VALUE ;
commit;
dbms_output.put_line('End ----------End Date:'||sysdate);
end if;
end;
/
exit;
EOF
Sample crontab :
0,5,10,15,20,25,30,35,40,45,50,55 * * * * monitor_temp.sh > monitor_temp.log 2>&1
TIP #1: How to minimize rollback segment generation in a bulk delete?
One of my clients has a monthly job that deletes many records. This job generates a lot of rollback. The main idea of this post is to look at options to minimize rollback generation in a bulk delete.
There are 2 options :
- Replace the delete statement with truncate. Truncate deletes all records in the table and does not generate any rollback. This solution may not be applicable in all cases; it is actually not applicable to my client because the code must not be changed (company rule!). Besides, the monthly job only deletes the table partially. (A partition-based variant is sketched at the end of this tip.)
- Run the delete statement iteratively. The following shows sample code :
DECLARE
  nCount number;
BEGIN
  LOOP
    delete from table? where rownum < 1001 and condition?;
    select count(*) into nCount from table? where condition?;
    commit; --- Cleans up rollback
    EXIT WHEN nCount = 0;
  END LOOP;
END;
/
With this option, the rollback is released after each deletion of 1000 records,
and those rollback segments can then be reused.
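As a variation on the first option: if the table happened to be partitioned on the column that drives the monthly delete (an assumption, not something stated above), the partial delete could be replaced by truncating just the relevant partition, which also generates no rollback; the table and partition names here are hypothetical:
ALTER TABLE scott.monthly_data TRUNCATE PARTITION p_2007_10;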