
Sunday, June 19, 2016

ORA-00600: error in Queue Table SYS.SYS$SERVICE_METRICS_TAB

I found the ORA-600 below in my RAC production database, version 11.2.0.4 + PSU7. Following the Oracle documentation, I did the following, and the recurring ORA-600 stopped completely.

Verify the following:

- alert_PROD1.log
- PROD1_q002_7930002.trc


"alert_PROD1.log" shows an ORA-00600 [kwqitnmphe:ltbagi] here:
---
Fri Mar 25 13:43:49 2016
Errors in file /u01/app/oracle/diag/rdbms/prod/PROD1/trace/PROD1_q000_10682516.trc (incident=40305):
ORA-00600: internal error code, arguments: [kwqitnmphe:ltbagi], [1], [0], [], [], [], [], [], [], [], [], []
Incident details in: /u01/app/oracle/diag/rdbms/prod/PROD1/incident/incdir_40305/PROD1_q000_10682516_i40305.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
..............................
..............................


Verify the "PROD1_q000_10682516.trc" and "PROD1_q000_10682516_i40305.trc" files.


Trace file /u01/app/oracle/diag/rdbms/prod/PROD1/trace/PROD1_q000_10682516.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/11.2.0/db_home
System name: AIX
Node name: rac01
Release: 1
Version: 7
Machine: 00F9D98B4C00
Instance name: PROD1
Redo thread mounted by this instance: 1
Oracle process number: 38
Unix process pid: 10682516, image: oracle@rac01 (Q000)


*** 2016-03-25 13:43:49.018
*** SESSION ID:(579.19) 2016-03-25 13:43:49.018
*** CLIENT ID:() 2016-03-25 13:43:49.018
*** SERVICE NAME:(SYS$BACKGROUND) 2016-03-25 13:43:49.018
*** MODULE NAME:(Streams) 2016-03-25 13:43:49.018
*** ACTION NAME:(QMON Slave) 2016-03-25 13:43:49.018

Incident 40305 created, dump file: /u01/app/oracle/diag/rdbms/prod/PROD1/incident/incdir_40305/PROD1_q000_10682516_i40305.trc
ORA-00600: internal error code, arguments: [kwqitnmphe:ltbagi], [1], [0], [], [], [], [], [], [], [], [], []

kwqitmmit during kwqitmmit:time mgr IOT proc: Error 600 in Queue Table SYS.SYS$SERVICE_METRICS_TAB
kwqicaclcur: Error 600
Cursor Session Number : 580
Cursor Session Serial : 3
Cursor Pin Number : 24585
kwqitmmit during kwqitmmit:end proc: Error 600 in Queue Table SYS.SYS$SERVICE_METRICS_TAB
kwqitptm: error 600 encountered during serving 13593

*** 2016-03-25 13:43:51.148
ORA-00600: internal error code, arguments: [kwqitnmphe:ltbagi], [1], [0], [], [], [], [], [], [], [], [], []
OPIRIP: Uncaught error 447. Error stack:
ORA-00447: fatal error in background process
ORA-00600: internal error code, arguments: [kwqitnmphe:ltbagi], [1], [0], [], [], [], [], [], [], [], [], []


Action plan:

1) If you are upgrading:

Drop the queue table -

exec dbms_aqadm.drop_queue_table( queue_table=>'SYS.SYS$SERVICE_METRICS_TAB', force => TRUE)

select object_name, object_type from dba_objects where object_name like '%SYS$SERVICE_%'; ----> no rows should be returned.

Follow the steps below to recreate it.

SQL>STARTUP UPGRADE

SQL>@$ORACLE_HOME/rdbms/admin/catproc.sql

SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql

select object_name, object_type,status from dba_objects where object_name like '%SYS$SERVICE_%';

If the script fails to recreate the queue, it may need to be created manually (with Oracle Support's guidance).



2) If the error occurs in production, schedule downtime and do the following:

 Drop the queue table -

exec dbms_aqadm.drop_queue_table( queue_table=>'SYS.SYS$SERVICE_METRICS_TAB', force => TRUE)

select object_name, object_type from dba_objects where object_name like '%SYS$SERVICE_%'; ----> no rows should be returned.

Follow the steps below to recreate it.

SQL>STARTUP

SQL>alter system enable restricted session;

SQL>@$ORACLE_HOME/rdbms/admin/catproc.sql

SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL>@$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL>alter system disable restricted session;



ORA-01157: cannot identify/lock data file 201 - see DBWR trace file :Fix


ORA-01157: cannot identify/lock data file 201 - see DBWR trace file
ORA-01110: data file 201: '+DATA'

Issue:

If you are using XML in your database and try to update any XML data, the above error may appear.

Fix:

1) Add the line below to your parameter file, if it is not already there, and bounce the database.


*.dispatchers='(PROTOCOL=TCP) (SERVICE=PRODXDB)'

2) Unlock the XDB user if locked.

SQL> alter user XDB account unlock;


3) If you have created a duplicate database, or opened a standby database in read/write mode, the source TEMP tablespace will no longer be valid. Drop the TEMP tablespace, create a new one, and make it the default.

e.g.,
SQL> DROP TABLESPACE TEMP  INCLUDING CONTENTS AND DATAFILES;

SQL> create temporary tablespace TEMP02 tempfile '/u02/PROD/ORADATA/temp_02.dbf' size 1024M autoextend on;

SQL> alter database default temporary tablespace TEMP02;

Bounce the database; the above error should no longer appear.

ORA-12162: TNS:net service name is incorrectly specified


I received a call from a junior DBA: while connecting to a newly installed database as "/ as sysdba", the above error appeared.

The Error:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Jul 29 14:30:36 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-12162: TNS:net service name is incorrectly specified


Enter user-name:

...
Solution:
This is a small and simple issue, but the error message ORA-12162 "TNS:net service name is incorrectly specified" is very misleading.

It suggests a problem with the tnsnames.ora file contents, but in reality ORA-12162 results from an improperly set ORACLE_SID value.

To fix this error on Windows, set ORACLE_SID:

c:> set ORACLE_SID=PROD

On Linux, set ORACLE_HOME and ORACLE_SID as follows:

export ORACLE_HOME=/u01/app/product/11.2.0/dbhome_1;

export ORACLE_SID=PROD;
export PATH=$ORACLE_HOME/bin:$PATH


ORA-07445 issues for SYS_AUTO_SQL_TUNING_TASK job

ORA-07445 for SYS_AUTO_SQL_TUNING_TASK

Automatic SQL Tuning in Oracle Database 11g

  • As part of Automatic SQL Tuning, Oracle 11g automatically runs the SQL Tuning Advisor against high impact SQL statements during maintenance windows. This process involves the following steps:
  • AWR statistics are used to compile an ordered list of the SQL statements with the greatest performance impact on the system, where the impact is the sum of the CPU and I/O times for the statement during the past week. The list excludes statements that are inherently less tunable, such as recently (within a month) tuned recursive statements, parallel queries, DML, DDL and SQL statements whose performance problems are caused by concurrency issues.
  • The SQL tuning advisor is run against each statement in turn. The outcome may include both SQL profiles and other recommendations.
  • Suggested SQL profiles are performance tested; those that result in at least a threefold improvement are accepted if the ACCEPT_SQL_PROFILES parameter is set to TRUE, or merely reported if it is set to FALSE. Accepted SQL profiles are optionally implemented. Several factors may prevent SQL profiles from being implemented automatically, including stale optimizer statistics on dependent objects. The TYPE column of the DBA_SQL_PROFILES view indicates whether a SQL profile was created manually (MANUAL) or automatically (AUTO-TUNE).
Oracle Database 11g Release 2 (11.2.0.2 onward) made some minor changes to the Automatic SQL Tuning feature introduced in Oracle 11g Release 1. In the previous release, reports and amendments to the automatic tuning task parameters were performed using the DBMS_SQLTUNE package. From 11.2.0.2 onward, this should be done using the DBMS_AUTO_SQLTUNE package, which requires the DBA role.

Note: the top-level enabling and disabling of the admin task is still done using the DBMS_AUTO_TASK_ADMIN package.


Issue:

If the automatic tuning task is failing, it can lead to repeated core dump generation.


Below sample errors may be found from alert log.
-----------------------------------------------

< Wed Mar 11 22:00:02 2015
< Begin automatic SQL Tuning Advisor run for special tuning task  "SYS_AUTO_SQL_TUNING_TASK"
< Wed Mar 11 22:00:35 2015
< Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x20] [PC:0x47EDC6F, __intel_new_memcpy()+3343] [flags: 0x0, count: 1]
< Errors in file /u02/app/oracle/diag/rdbms/PROD/PROD/trace/PROD_j000_53026.trc  (incident=36860):
< ORA-07445: exception encountered: core dump [__intel_new_memcpy()+3343] [SIGSEGV] [ADDR:0x20] [PC:0x47EDC6F] [Address not mapped to object] []
< Incident details in: /u02/app/oracle/diag/rdbms/PROD/PROD/incident/incdir_36860/PROD_j000_53026_i36860.trc
< Use ADRCI or Support Workbench to package the incident.
< See Note 411.1 at My Oracle Support for error and packaging details.
< Wed Mar 11 22:00:43 2015
< Dumping diagnostic data in directory=[cdmp_20150311220043], requested by (instance=1, osid=53026 (J000)), summary=[incident=36860].
< Wed Mar 11 22:00:44 2015
< Sweep [inc][36860]: completed
< Sweep [inc2][36860]: completed
< Wed Mar 11 22:01:11 2015
< Thread 1 cannot allocate new log, sequence 2199
< Private strand flush not complete
<   Current log# 3 seq# 2198 mem# 0: /u02/PROD/oradata/redo03.log
< Thread 1 advanced to log sequence 2199 (LGWR switch)
<   Current log# 4 seq# 2199 mem# 0: /u02/PROD/oradata/redo04.log
< Wed Mar 11 22:01:16 2015
< Archived Log entry 3026 added for thread 1 sequence 2198 ID 0x81a6dea4 dest 1:


Action:

Disable auto tuning task advisor.

To disable:

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/


If required to enable:

BEGIN
  DBMS_AUTO_TASK_ADMIN.enable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

To verify:

SELECT parameter_name, parameter_value
FROM   dba_advisor_parameters
WHERE  task_name = 'SYS_AUTO_SQL_TUNING_TASK'
AND    parameter_name = 'ACCEPT_SQL_PROFILES';

output:

PARAMETER_NAME                 PARAMETER_VALUE
------------------------------ ---------------
ACCEPT_SQL_PROFILES            FALSE


Resolving ORA-02374, 12899, 02372 issues during data-migration with different characterset

Resolving ORA-02374, ORA-12899, ORA-02372 issues during data-migration with different database character set

-- Error in import log

ORA-02374: conversion error loading table "crm"."transaction"
ORA-12899: value too large for column DETAILS_NOTE (actual: 4024, maximum: 4000)
ORA-02372: data for row: DETAILS_NOTE: 0X'434552564943414C205350494E450D0A0D0A4D756C7469706C' 

Issue Description:

I have transaction data in the table "crm.transaction", in a database whose character set is WE8MSWIN1252. The DETAILS_NOTE column is of type VARCHAR2, and some of its values fully occupy 4000 characters. When I try to import the data with IMPDP into a database using the AL32UTF8 character set, the above errors appear.


Root cause:

RCA: When data is converted from the source character set (WE8MSWIN1252) to the target character set (AL32UTF8), special characters can expand to as many as 3 bytes each, so a value that filled 4000 bytes at the source no longer fits: "ORA-12899: value too large for column DETAILS_NOTE (actual: 4024, maximum: 4000)".
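This byte expansion is easy to demonstrate outside the database. The sketch below (plain Python, purely illustrative of the character-set behaviour, not of IMPDP itself) shows how single-byte Windows-1252 characters grow to 2 or 3 bytes in UTF-8:

```python
# Why WE8MSWIN1252 data can overflow a 4000-byte VARCHAR2 column
# when converted to AL32UTF8 (UTF-8).
samples = ["A", "é", "€"]  # ASCII, accented character, special character
for ch in samples:
    cp1252_len = len(ch.encode("cp1252"))  # source: always 1 byte
    utf8_len = len(ch.encode("utf-8"))     # target: 1 to 3 bytes
    print(ch, cp1252_len, utf8_len)

# A value of 4000 accented characters fits a 4000-byte column exactly
# in WE8MSWIN1252, but needs 8000 bytes in UTF-8 -- hence ORA-12899.
value = "é" * 4000
print(len(value.encode("cp1252")), len(value.encode("utf-8")))
```

This is also why `VARCHAR2(4000 CHAR)` (character-length semantics) is the workaround used below.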

Solution :

Target Character-set : AL32UTF8

Method-1:

  • Migrate your schema or database.
  • Create a database link from the source to the target database.
  • Disable constraints.
  • Alter the table as shown:

SQL> alter table crm.transaction modify DETAILS_NOTE VARCHAR2(4000 CHAR);

  • Now insert the data over the database link.


Method-2:

  • Migrate your schema or database.
  • Disable constraints.
  • Alter the table as shown:

SQL> alter table crm.transaction modify DETAILS_NOTE VARCHAR2(4000 CHAR);

  • Import the table from the logical dump in the format below.

impdp directory=data_pump dumpfile=PROD_full_%U.dmp logfile=PROD_trx_imp.log tables=CRM.TRANSACTION table_exists_action=TRUNCATE

See the DDL of the Table after alter:

create table crm.transaction
(
  trnxid           NUMBER not null,
  rsltrpthdrid     NUMBER not null,
  paramdetailid    NUMBER not null,
  details_note     VARCHAR2(4000 CHAR),
  unitid           NUMBER
) tablespace CRM_TBLS;



Compressed piece HARD block error/ Validating backups / Troubleshoot ORA-19870

Verification of Compressed piece HARD block error in RMAN
Validating backups in RMAN
Troubleshoot ORA-19870

Problem Description :
While restoring a database on one of my standby servers, I received the error below.

Error from RMAN log:

$ rman target /

RMAN> RUN
{
allocate channel ch1 type disk;
RESTORE DATABASE;
SWITCH DATAFILE ALL;
}2> 3> 4> 5> 6> 

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=273 device type=DISK

Starting restore at 23-JAN-15

channel ch1: starting datafile backup set restore
channel ch1: specifying datafile(s) to restore from backup set
channel ch1: restoring datafile 00003 to /u01/ORADATA/PROD/undotbs01.dbf
channel ch1: restoring datafile 00004 to /u01/ORADATA/PROD/users01.dbf
...
...
channel ch1: reading from backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp
channel ch1: ORA-19870: error while restoring backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp
ORA-19599: block number 167207 is corrupt in backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp

channel ch1: starting datafile backup set restore
channel ch1: specifying datafile(s) to restore from backup set
channel ch1: restoring datafile 00001 to /u01/ORADATA/PROD/system01.dbf
channel ch1: restoring datafile 00002 to /u01/ORADATA/PROD/sysaux01.dbf
...
...
channel ch1: reading from backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0gpk9j_.bkp
channel ch1: ORA-19870: error while restoring backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0gpk9j_.bkp
ORA-19599: block number 204403 is corrupt in backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0gpk9j_.bkp

failover to previous backup

released channel: ch1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/23/2015 14:30:27
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 62 found to restore
RMAN-06023: no backup or copy of datafile 58 found to restore
RMAN-06023: no backup or copy of datafile 33 found to restore
RMAN-06023: no backup or copy of datafile 14 found to restore
RMAN-06023: no backup or copy of datafile 13 found to restore

RMAN> 

Error From Alert log:


Fri Jan 23 14:22:55 2015 
Full restore complete of datafile 35 /u01/ORADATA/PROD/HRM_INDEX01.dbf. Elapsed time: 0:04:44 
checkpoint is 101068670522 
last deallocation scn is 98368972366 
Fri Jan 23 14:23:11 2015 
Corrupt block 167207 found during reading backup piece, file=/u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp, corr_type=3
Reread of blocknum=167207,
.....
.....
file=/u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp, found same corrupt data
Reread of blocknum=167207, file=/u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp, found same corrupt data
Compressed piece HARD block error, bit 8 off in block number 00000001 for file 0 
Fri Jan 23 14:25:12 2015 
.... 
.... 


Investigation:

A) Primary database:
Step 1 - Is there any block corruption in the database?
Action:
SQL> select * from v$database_block_corruption; 

no rows selected 

Step 2 - Are there any failed backups?

RMAN> list failure; 

using target database control file instead of recovery catalog 
no failures found that match specification 

Step 3 - Was the backup created with the compression option?

If no, there is no issue here; if yes, we examine how the backup was transferred.

Step 4 - Are all required backup pieces validated?

RMAN> list backuppiece '/u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp';

using target database control file instead of recovery catalog

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
1227    1227    1   1   AVAILABLE   DISK        /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp

-- Take the BS Key and run the command below.

RMAN> validate backupset 1308;   -- just verified using a sample BS key

Starting validate at 23-JAN-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=807 device type=DISK
channel ORA_DISK_1: starting validation of archived log backup set
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/PROD/backupset/2015_01_27/o1_mf_annnn_TAG20150127T053329_bdfogl4f_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/PROD/backupset/2015_01_27/o1_mf_annnn_TAG20150127T053329_bdfogl4f_.bkp tag=TAG20150127T053329
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
Finished validate at 23-JAN-15

RMAN>

-- It validated successfully, i.e., there is no issue with the backup piece on production.

Note :

You can use the VALIDATE option of the BACKUP command to verify that database files exist and are in the correct locations, and have no physical or logical corruptions that would prevent RMAN from creating backups of them. When performing a BACKUP... VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during a real backup. It does not, however, actually produce any backup sets or image copies.

If the backup validation discovers corrupt blocks, then RMAN updates the V$DATABASE_BLOCK_CORRUPTION view with rows describing the corruptions. You can repair corruptions using block media recovery, documented in the Oracle Database Backup and Recovery Advanced User's Guide. After a corrupt block is repaired, the row identifying this block is deleted from the view.

For example, you can validate that all database files and archived logs can be backed up by running a command as follows:

RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

The RMAN client displays the same output that it would if it were really backing up the files. If RMAN cannot validate the backup of one or more of the files, then it issues an error message. For example, RMAN may show output similar to the following:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 08/29/2002 14:33:47
ORA-19625: error identifying file /oracle/oradata/trgt/arch/archive1_6.dbf
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3


Now, verification on the standby database:

1) How were the backup pieces transferred?
If by SCP, the backup piece may have picked up block corruption in transit; the RMAN log and alert log here show exactly that type of error. To eradicate it, transfer the backups and archive logs again using FTP in binary mode.
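One quick way to rule out corruption in transit is to compare a checksum of each backup piece on the primary and on the standby before cataloging. A minimal sketch (the paths in the comments are illustrative, not from the logs above):

```python
import hashlib

def file_md5(path, chunk_size=1024 * 1024):
    """Return the MD5 hex digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compute the digest on the primary and again on the standby after the
# transfer; any mismatch means the piece was corrupted in transit and
# must be transferred again (e.g. FTP in binary mode).
# source_sum = file_md5("/u02/bkp/prodbkp/.../piece.bkp")  # on primary
# target_sum = file_md5("/u02/bkp/prodbkp/.../piece.bkp")  # on standby
# assert source_sum == target_sum, "backup piece corrupted in transit"
```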

2) Have you validated the backups on the standby after cataloging all the backup pieces?
If not, do so. If yes, check the status. If it looks like the output below, there is a corrupt block in the backup piece.

RMAN> list backuppiece '/u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp'; 

using target database control file instead of recovery catalog 

List of Backup Pieces 
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
1227    1227    1   1   AVAILABLE   DISK        /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp

RMAN> validate backupset 1227; 

Starting validate at 23-JAN-15 
allocated channel: ORA_DISK_1 
channel ORA_DISK_1: SID=273 device type=DISK 
channel ORA_DISK_1: starting validation of datafile backup set 
channel ORA_DISK_1: reading from backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp 
RMAN-00571: =========================================================== 
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS =============== 
RMAN-00571: =========================================================== 
RMAN-03009: failure of validate command on ORA_DISK_1 channel at 01/23/2015 20:42:34 
ORA-19870: error while restoring backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp 
ORA-19599: block number 167207 is corrupt in backup piece /u02/bkp/prodbkp/backupset/backupset/2015_01_22/o1_mf_nnndf_TAG20150122T050216_bd0fr199_.bkp 

RMAN>

-- This confirms block corruption in the backup piece.

Others:
Verification of FTP services:
-- Verify service status, all services

# service --status-all

-- Specific services

# service vsftpd restart
# service vsftpd status
# service vsftpd stop

Example:
# service vsftpd status
vsftpd is stopped

# service vsftpd start
Starting vsftpd for vsftpd:                                [  OK  ]

# service vsftpd status
vsftpd (pid 32674) is running...


Troubleshoot : ORA-02020: too many database links in use

Resolve "ORA-02020: too many database links in use" error 

One of my databases is used as a remote database by local and distributed databases. For one requirement we created a number of database links. Many queries were fired over these links, and at that point we found the error below:

"ORA-02020: too many database links in use "

Cause:  The current session has exceeded the INIT.ORA open_links maximum.

To find:
---------
SQL> show parameter open_links;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
open_links                           integer     4
open_links_per_instance              integer     4


Action: 
-------
Increase the open_links limit, or free up some open links by committing or rolling back the transaction and canceling open cursors that reference remote databases.

To Change:
----------
SQL> alter system set open_links_per_instance=10 scope=spfile;
SQL> alter system set open_links=10 scope=spfile;

Then bounce the database.

Note:
> The default value is 4.
> If it is set to 0, distributed transactions are not allowed.
> If you expect your transactions to have at most 3 database links open concurrently, set this parameter to 3 or higher.
> Do not set it too high; it is better for the application to close database links when no longer in use than to raise the parameter.
If you are not sure how many database links are opened concurrently by your session's application, you can query v$dblink.

SQL> select in_transaction, count(*) from v$dblink
     group by in_transaction;

IN_   COUNT(*)
--- ----------
YES          1
Extra Coverage:

Close a db link
To explicitly close a database link, use one of the commands below:

SQL> alter session close database link remotedb;  -- remotedb --> a dblink name
Session altered.

OR
SQL> exec DBMS_SESSION.CLOSE_DATABASE_LINK('REMOTEDB');

Know your open links

Once you have created and made use of a database link in Oracle, you might want to keep an eye on the number of concurrent open database links in your database so you can tune the open_links initialization parameter.

You will need to query v$dblink to see how many links are open in your session:

SQL> select in_transaction, count(*) from v$dblink
     group by in_transaction;

IN_   COUNT(*)
--- ----------
YES          1

Here are some interesting columns and descriptions of v$dblink: 

db_link          DB link name
owner_id         Owner name
logged_on        Is the database link currently logged on?
protocol         The DB link's communications protocol
open_cursors     Are there any cursors open for the DB link?
in_transaction   Is the DB link part of a transaction that has not yet been committed or rolled back?
update_sent      Was there an update on the DB link?

dba_db_links

To gather information on all database links in your database, query dba_db_links.
You will need DBA privileges to see this view; the alternatives are user_db_links and all_db_links.

A user without DBA privileges can query all_db_links to see which DB links are available to them.

SQL> select * from dba_db_links;

OWNER      DB_LINK    HOST       CREATED
---------- ---------- ---------- --------------------
GOURANG    REMOTEDB   remotedb   12-May-2014 18:10:01

Some interesting columns: 

owner       User who owns the DB link; shows 'PUBLIC' for a public database link.
db_link     DB link name.
username    Username specified in the CREATE statement, if hardcoded; null if not specified.
host        The tnsnames alias specified in the CREATE statement.
created     Date and time of link creation.


Troubleshoot -- enq: TM - contention

Resolve  "enq: TM - contention " issues in Oracle

Recently, I was assisting with one of our production Oracle databases, diagnosing sessions waiting on the "enq: TM - contention" event. The blocked sessions were executing simple INSERT and UPDATE statements similar to:

INSERT INTO customer VALUES (:1, :2, :3);

Query to find blocking session details:

select sid,serial#,event, blocking_session, username,status,terminal,program,sql_id
from v$session
where BLOCKING_SESSION  IS NOT NULL;

About "enq: TM - contention" :

Waits on "enq: TM - contention" typically indicate unindexed foreign key constraints. Reviewing the PRODUCT table, we found a foreign key constraint referencing the CUSTOMER table that did not have an associated index. This was confirmed with the development team and verified against the DDL. We added an index on the column referencing the CUSTOMER table and the problem was solved.

Finding the root cause of the enq: TM - contention wait event

Using the query above to find the blocking sessions, we found the real culprit. Periodically, as the company reviewed its customer list, it "cleaned up" the CUSTOMER table several times a week. As a result, rows were deleted from the CUSTOMER table, and those deletes cascaded to the PRODUCT table, taking out TM locks on it.

Reproducing a typical problem that leads to this wait

This problem has a simple fix, but I wanted to understand more about why it happens. So I reproduced the same issue to see what goes on under the covers. I first created a subset of the tables involved and loaded them with sample data.

CREATE TABLE customer
( customer_id number(10) not null,
customer_name varchar2(50) not null,
contact_name varchar2(50),
CONSTRAINT customer_pk PRIMARY KEY (customer_id)
);
INSERT INTO customer VALUES (1, 'customer 1', 'Contact 1');
INSERT INTO customer VALUES (2, 'customer 2', 'Contact 2');
COMMIT;

CREATE TABLE product
( product_id number(10) not null,
product_name varchar2(50) not null,
customer_id number(10) not null,
CONSTRAINT fk_customer
FOREIGN KEY (customer_id)
REFERENCES customer(customer_id)
ON DELETE CASCADE );
INSERT INTO product VALUES (1, 'Product 1', 1);
INSERT INTO product VALUES (2, 'Product 2', 1);
INSERT INTO product VALUES (3, 'Product 3', 2);
COMMIT;

I then executed statements similar to what we found at this customer:

User 1: DELETE customer WHERE customer_id = 1;
User 2: DELETE customer WHERE customer_id = 2;
User 3: INSERT INTO customer VALUES (5, 'customer 5', 'Contact 5');

Similar to the customer's experience, User 1 and User 2 hung waiting on "enq: TM - contention". Reviewing information from V$SESSION I found the following:

-- Find details of blocking sessions
SQL>
SELECT l.sid, s.blocking_session blocker, s.event, l.type, l.lmode, l.request, o.object_name, o.object_type
FROM v$lock l, dba_objects o, v$session s
WHERE UPPER(s.username) = UPPER('&User')
AND l.id1 = o.object_id (+)
AND l.sid = s.sid
ORDER BY sid, type;

-- Solution

Following along with the solution we used in production, we added an index on the foreign key column in the PRODUCT table that references the CUSTOMER table:

SQL> CREATE INDEX idx_fk_customer ON product (customer_id);

When we ran the test case again everything worked fine. There were no exclusive locks acquired and hence no hanging. Oracle takes out exclusive locks on the child table, the PRODUCT table in our example, when a foreign key constraint is not indexed.

Sample query to find unindexed foreign key constraints

Now that we know unindexed foreign key constraints can cause severe problems, here is a script that I use to find them for a specific user (this can easily be tailored to search all schemas):

SELECT * FROM (
SELECT c.table_name, cc.column_name, cc.position column_position
FROM   user_constraints c, user_cons_columns cc
WHERE  c.constraint_name = cc.constraint_name
AND    c.constraint_type = 'R'
MINUS
SELECT i.table_name, ic.column_name, ic.column_position
FROM   user_indexes i, user_ind_columns ic
WHERE  i.index_name = ic.index_name
)
ORDER BY table_name, column_position;


Common 'ORA-' errors found in PROD envs:

ORA-12154: TNS:could not resolve the connect identifier specified

Cause : This error is, almost appropriately, often the very first one you get when trying to establish a connection to your database. Simply put, it means that you have named the database you wish to connect to and Oracle doesn't know who the heck you're talking about. This error is often caused by typos, maybe in your connection string, but possibly in your tnsnames.ora file. It is also possible that the tnsnames.ora file is not accessible or does not even exist. If it does, ensure that it contains the service name you are using; also go over it with a fine-tooth comb to make sure it doesn't have any unpaired parentheses or the like.

Action : Cross-check the TNS entry, and check firewall access to the connecting server.
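For the "unpaired parentheses" check in particular, a small script saves some squinting. The sketch below checks parenthesis balance only, not full tnsnames.ora syntax, and the sample entry is made up:

```python
# Sanity-check a tnsnames.ora entry for unpaired parentheses --
# a classic cause of unresolved connect identifiers.
# (Balance check only; it does not validate full tnsnames syntax.)

def parens_balanced(text):
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:        # a ')' with no matching '('
                return False
    return depth == 0            # every '(' was closed

good = ("PROD = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rac01)"
        "(PORT = 1521)) (CONNECT_DATA = (SERVICE_NAME = PROD)))")
bad = good[:-1]                  # drop the final ')'
print(parens_balanced(good))     # True
print(parens_balanced(bad))      # False
```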

ORA-00600: internal error code, arguments: [%s], [%s],[%s], [%s], [%s]

Cause : This is a generic error; it means something serious has gone wrong and you are going to need to roll up your sleeves, dig in, and find out what. But you are not without clues. Your alert.log file will contain the path to your trace file. You will want to look in it, as a record is written to the trace file every time an ORA-00600 error occurs.
Take the information you get from your trace file and the first argument in the square brackets (the internal message number), and head over to My Oracle Support (Metalink). There you will find a handy ORA-600 lookup tool (Note 153788.1) that will direct you to additional information to help solve your problem.

Action : Check with Oracle support as per your version and arguments

Note: There are many possible causes of ORA-00600. Based on the arguments and the Oracle documentation, you can identify the fix; many known bugs behind ORA-00600 are described in the Oracle documentation.
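Since the lookup tool keys on the first bracketed argument, pulling it out of an alert-log line is easy to script. A small sketch, using the ORA-00600 line from the first section of this post:

```python
import re

# Extract the internal message name and first argument from an
# ORA-00600 alert-log line; these are what the My Oracle Support
# ORA-600 lookup tool asks for.
line = ("ORA-00600: internal error code, arguments: "
        "[kwqitnmphe:ltbagi], [1], [0], [], [], [], [], [], [], [], [], []")

args = re.findall(r"\[([^\]]*)\]", line)   # everything between brackets
print(args[0])   # kwqitnmphe:ltbagi  (the internal message name)
print(args[1])   # 1                  (first argument)
```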

ORA-03113: end-of-file on communication channel

This error pretty much means that your connection has clutched its chest and died. For some reason, your client machine and the database server are acting like an old married couple and have stopped talking to each other. That reason could be one of a rather long list: has your server crashed? has someone pulled out your network cable? was your process killed at the O/S level? is your Windows log full? or maybe there is an Oracle internal error?

Do not overlook the obvious. This error is sometimes caused by the simplest of things. If, however, it is caused by an Oracle internal error, look to your alert log for further information.

ORA-01555 Snapshot Too Old

The ORA-01555 is caused by Oracle's read consistency mechanism.  If you have a long-running SQL statement that starts at 10:30 AM, Oracle ensures that all rows appear as they did at 10:30 AM, even if the query runs until noon! Oracle does this by reading the "before image" of changed rows from the online undo segments.  If you have lots of updates, long-running SQL, and a too-small UNDO tablespace, the ORA-01555 error will appear.

From the docs we see that the ORA-01555 error relates to insufficient undo storage or a too small value for the undo_retention parameter:

ORA-01555: snapshot too old: rollback segment number string with name "string" too small

Cause: Rollback records needed by a reader for consistent read are overwritten by other writers.

Action: If in Automatic Undo Management mode, increase the setting of UNDO_RETENTION. Otherwise, use larger rollback segments.

To read : how to set an optimal value for UNDO_RETENTION and the undo tablespace size
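As a starting point for that sizing exercise, V$UNDOSTAT records what retention the workload has actually needed, which you can compare against the configured value and the current undo tablespace size (a sketch; the UNDO% name pattern assumes a conventionally named undo tablespace):

```sql
-- What retention has the workload actually required?
SELECT MAX(tuned_undoretention) AS tuned_retention_sec,
       MAX(maxquerylen)         AS longest_query_sec
FROM   v$undostat;

-- Compare against the configured retention and undo tablespace size
SHOW PARAMETER undo_retention

SELECT tablespace_name, SUM(bytes)/1024/1024 AS size_mb
FROM   dba_data_files
WHERE  tablespace_name LIKE 'UNDO%'
GROUP  BY tablespace_name;
```

If the longest query runs longer than the tuned retention the undo can actually support, ORA-01555 is a matter of time.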

If you are facing the following error for the query below, then follow my suggested action plan; I rectified the issue the same way.

ERROR at line ..:
ORA-01555: snapshot too old: rollback segment number 0 with name "SYSTEM" too
small ...


Query to find the affected segments (it maps corrupted blocks to their owning segments):

select * from(
SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
, greatest(e.block_id, c.block#) corr_start_block#
 , least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#
 , least(e.block_id+e.blocks-1, c.block#+c.blocks-1)
 - greatest(e.block_id, c.block#) + 1 blocks_corrupted
 , null description
 FROM dba_extents e, v$database_block_corruption c
 WHERE e.file_id = c.file#
 AND e.block_id <= c.block# + c.blocks - 1
 AND e.block_id + e.blocks - 1 >= c.block#
 UNION
 SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file#
 , header_block corr_start_block#
 , header_block corr_end_block#
 , 1 blocks_corrupted
 , 'Segment Header' description
 FROM dba_segments s, v$database_block_corruption c
 WHERE s.header_file = c.file#
 AND s.segment_name is NOT NULL
 AND s.header_block between c.block# and c.block# + c.blocks - 1
 UNION
 SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
 , greatest(f.block_id, c.block#) corr_start_block#
 , least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
 , least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
 - greatest(f.block_id, c.block#) + 1 blocks_corrupted
 , 'Free Block' description
 FROM dba_free_space f, v$database_block_corruption c
 WHERE f.file_id = c.file#
 AND f.block_id <= c.block# + c.blocks - 1
 AND f.block_id + f.blocks - 1 >= c.block#
 order by file#, corr_start_block#) t where segment_name IS NOT NULL; 


Solution with workaround:
Force an extension of the SYSTEM rollback segment to occur using the following :

connect / as sysdba
alter system set undo_management = MANUAL scope=spfile;
shutdown immediate
startup

create table sample (c1 number, c2 varchar2(10));
 

begin
for i in 1..400000 loop
insert into sample values (i,'ABCDEFGH');
end loop;
end;
/

delete sample;
commit;

alter system set undo_management = AUTO scope=spfile;

shutdown immediate
startup 


Then monitor the database again.

ORA-07445 - exception encountered: core dump

Cause : An operating system exception occurred which should result in the creation of a core file. This is an internal error.

Action : Contact Oracle Customer Support.

Common precipitators of the ORA-07445 include:
-- High volume user transactions
-- Software bugs (i.e. Bug 4098853).  See note 342443.1 on MOSC.
-- Too-small RAM regions (shared_pool_size, java pool, large_pool_size), and a too-small application memory stack (e.g. PL/SQL array too small)
-- Too small undo and temp segments
-- Program errors (addressing outside of a RAM region), e.g. S0C4
-- Improper NLS parameter settings
-- Hardware errors
-- Oracle block corruption and a host of other related issues.
-- When Oracle internal job failed with specific exception

Note: There are many reasons for ORA-07445. Based on the arguments and the Oracle documentation you can fix it; many bugs that raise ORA-07445 are described in the Oracle documentation.

ORA-01652: unable to extend temp segment by string in tablespace string 

Cause: Failed to allocate an extent of the required number of blocks for a temporary segment in the tablespace indicated.

Action: Use ALTER TABLESPACE ADD TEMPFILE statement to add one or more files to the tablespace indicated.
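A minimal sketch of that action (the tempfile path and sizes below are illustrative placeholders, not values from this system; check current usage first so you add space where it is actually needed):

```sql
-- How much of each temp tablespace is currently in use?
SELECT tablespace_name,
       SUM(bytes_used)/1024/1024 AS used_mb,
       SUM(bytes_free)/1024/1024 AS free_mb
FROM   v$temp_space_header
GROUP  BY tablespace_name;

-- Add a tempfile (path and sizes are examples)
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/app/oracle/oradata/PROD/temp02.dbf'
  SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE 8G;
```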

ORA-00257: archiver error. Connect internal only, until freed. 

Cause: The archiver process received an error while trying to archive a redo log. If the problem is not resolved soon, the database will stop executing transactions. The most likely cause of this message is the destination device is out of space to store the redo log file.

Action: Check the archiver trace file for a detailed description of the problem. Also verify that the device specified in the initialization parameter LOG_ARCHIVE_DEST is set up properly for archiving.
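To see quickly whether the archive destination (or the flash recovery area, if archiving goes there) is the problem, the standard views can be queried (a sketch; then free space with RMAN backup/delete of archived logs, or raise db_recovery_file_dest_size):

```sql
-- Is any archive destination in error?
SELECT dest_name, status, error
FROM   v$archive_dest
WHERE  status <> 'INACTIVE';

-- If archiving to the FRA: how full is it?
SELECT name,
       space_limit/1024/1024 AS limit_mb,
       space_used/1024/1024  AS used_mb
FROM   v$recovery_file_dest;
```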

ORA-01031: insufficient privileges 

Cause: An attempt was made to change the current username or password without the appropriate privilege. This error also occurs if attempting to install a database without the necessary operating system privileges. This error may occur if the user was granted the necessary privilege at a higher label than the current login.

Action: Ask the database administrator to perform the operation or grant the required privileges. For Trusted Oracle users getting this error although granted the appropriate privilege at a higher label, ask the database administrator to regrant the privilege at the appropriate label.

For the DBA, the ORA-01031 can happen if the Oracle executables do not have the correct read and execute permissions (e.g. 770 in UNIX/Linux); also ensure that the oracle user is a member of the dba group (see /etc/group).  There are similar permissions in the Windows registry.
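From inside the database, you can check what the session actually holds before asking for a grant (the user and privileges in the GRANT below are only examples):

```sql
-- What privileges does the current session have?
SELECT privilege FROM session_privs ORDER BY privilege;

-- As a DBA, grant what is missing (example user and privileges)
GRANT CREATE SESSION, CREATE TABLE TO scott;
```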

ORA-00918: column ambiguously defined

Cause :A column name used in a join exists in more than one table and is thus referenced ambiguously. In a join, any column name that occurs in more than one of the tables must be prefixed by its table name when referenced. The column should be referenced as TABLE.COLUMN or TABLE_ALIAS.COLUMN. For example, if tables EMP and DEPT are being joined and both contain the column DEPTNO, then all references to DEPTNO should be prefixed with the table name, as in EMP.DEPTNO or E.DEPTNO.

Action: Prefix references to column names that exist in multiple tables with either the table name or a table alias and a period (.), as in the examples above.

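Using the EMP/DEPT example from the cause text, the failing and corrected forms look like this:

```sql
-- Raises ORA-00918: DEPTNO exists in both EMP and DEPT
SELECT deptno, dname, ename
FROM   emp e JOIN dept d ON e.deptno = d.deptno;

-- Fixed: the ambiguous column is qualified with its table alias
SELECT e.deptno, d.dname, e.ename
FROM   emp e JOIN dept d ON e.deptno = d.deptno;
```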

ORA-00054: resource busy and acquire with NOWAIT specified

Cause : This error usually occurs when you run DDL against an object that is currently in use by another session. It is not a critical error.

Action : Redefine your object using the ONLINE keyword wherever possible.
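When the DDL keeps failing, it helps to find the session holding the lock, or (in 11g) to let the DDL wait instead of erroring out immediately (a sketch using the standard lock views):

```sql
-- Who is holding the lock on my object?
SELECT s.sid, s.serial#, s.username, o.object_name
FROM   v$locked_object l
JOIN   dba_objects o ON o.object_id = l.object_id
JOIN   v$session   s ON s.sid       = l.session_id;

-- 11g: wait up to 30 seconds for the lock instead of raising ORA-00054
ALTER SESSION SET ddl_lock_timeout = 30;
```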

ORA-04031 : Unable to allocate x bytes of shared memory

Cause :
ORA-04031 deals with shared memory issues: a process attempted to allocate a large piece of contiguous memory in the shared pool (or another SGA pool) for the currently executing SQL, and no contiguous chunk of the requested size was available.

Action:
 If the shared pool is out of memory, either use the DBMS_SHARED_POOL package to pin large packages, reduce your use of shared memory, or increase the amount of available shared memory by increasing the value of the initialization parameters SHARED_POOL_RESERVED_SIZE and SHARED_POOL_SIZE. If the large pool is out of memory, increase the initialization parameter LARGE_POOL_SIZE.
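Before increasing anything, see how much free memory each pool actually has and what the current sizes are (a sketch against the standard V$SGASTAT view):

```sql
-- How much free memory is left in each SGA pool?
SELECT pool, name, ROUND(bytes/1024/1024, 1) AS free_mb
FROM   v$sgastat
WHERE  name = 'free memory';

-- Current pool sizes
SHOW PARAMETER shared_pool_size
SHOW PARAMETER large_pool_size
```

Plenty of free memory with ORA-04031 still occurring usually points to fragmentation rather than a pool that is simply too small.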
More ORA errors...