
Wednesday, June 29, 2016

Plan Baseline

Which parameters have been modified from their default values?

SELECT
    CHILD_NUMBER CN,
    NAME,
    VALUE,
    ISDEFAULT DEF
  FROM
      V$SQL_OPTIMIZER_ENV
    WHERE
     SQL_ID='f6rs5tka838kp'
     AND CHILD_NUMBER=3
 ORDER BY
     NAME;


CN NAME                                VALUE           DEF
-- ----------------------------------- --------------- ---
 3 _pga_max_size                       368640 KB       NO
 3 active_instance_count               1               YES
 3 bitmap_merge_area_size              1048576         YES
 3 cell_offload_compaction             ADAPTIVE        YES
 3 cell_offload_plan_display           AUTO            YES
 3 cell_offload_processing             true            YES
 3 cpu_count                           8               YES
 3 cursor_sharing                      exact           YES
 3 db_file_multiblock_read_count       128             YES
 3 hash_area_size                      131072          YES
 3 is_recur_flags                      0               YES
 3 optimizer_capture_sql_plan_baseline false           YES





Plan Baseline:
If you don't specify a PLAN_HASH_VALUE when loading a plan baseline from memory, then Oracle will load all plans available in memory for a SQL query. The next time the query executes, the optimizer will use the lowest-cost accepted plan in the plan baseline. If multiple plans exist in the plan baseline for a query, and you have a specific plan that you want the optimizer to always use, then consider altering that plan to a FIXED state.
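For example, you can compare the plans in a baseline (and their costs) by querying DBA_SQL_PLAN_BASELINES; substitute a SQL handle from your own database:

SELECT plan_name, enabled, accepted, fixed, optimizer_cost
FROM dba_sql_plan_baselines
WHERE sql_handle = 'SQL_58c7b2f2891bf765'
ORDER BY optimizer_cost;

The lowest-cost row with ACCEPTED=YES is the plan the optimizer will prefer (unless another plan is FIXED).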


2. PLAN BASELINE

2.1  How to Create a Plan Baseline for a SQL Statement in Memory
2.2  How to Create Plan Baselines for SQL Contained in a SQL Tuning Set from AWR
2.3  How to Automatically Add Plan Baselines
2.4  How to Alter a Plan Baseline
2.5  How to Determine If Plan Baselines Exist
2.6  How to Determine If a Query Is Using a Plan Baseline
2.7  How to Display Plan Baseline Execution Plans
2.8  How to Manually Add a New Execution Plan to a Plan Baseline (Evolving)
2.9  How to Toggle the Automatic Acceptance of New Low-Cost Execution Plans
2.10  How to Disable Plan Baselines
2.11  How to Remove Plan Baseline Information
2.12  How to Transport Plan Baselines


2.1  How to Create a Plan Baseline for a SQL Statement in Memory

 You're planning a database upgrade. You know from past experience that after a database is upgraded, SQL queries can sometimes perform more poorly. This happens when the new version of the optimizer chooses a different execution plan that is less efficient than the plan used by the prior version. Therefore, to ensure that the optimizer consistently chooses the same execution plan before and after the upgrade, you want to establish a plan baseline for the query. You'll start by establishing a plan baseline for a SQL query in memory.

The procedure for manually associating a plan baseline with a SQL statement is as follows:
 

1. Identify the SQL statement for which you want a plan baseline.
 

2. Provide an identifier such as the SQL_ID as input to the DBMS_SPM package to create a plan
baseline for the SQL statement.
 

For example, suppose you have a SQL statement you’ve been working with such as the following:
 

SQL> select first_name from emp where emp_id = 100;
 

Now query the V$SQL view to determine the SQL_ID for the query:
 

select sql_id, plan_hash_value, sql_text
from v$sql
where sql_text like 'select first_name from emp where emp_id = 100'
and sql_text not like '%v$sql';


SQL_ID        PLAN_HASH_VALUE SQL_TEXT
------------- --------------- ---------------------------------------------
dv73f7y69ny8z      3956160932 select first_name from emp where emp_id = 100


Now that the SQL_ID has been identified, use it as input to the DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE function to create a plan baseline for the given query—for example:
 

DECLARE
plan1 PLS_INTEGER;
BEGIN
plan1 := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'dv73f7y69ny8z');
END;
/



Keep in mind that it's possible for a single SQL statement to have more than one execution plan associated with it in memory. This can happen when the SQL executes multiple times and something in the environment changes that causes the optimizer to choose a different plan (such as updated statistics, use of bind variables, changes to database initialization parameters, adding/deleting a SQL profile, and so on). You can uniquely identify a single plan via the combination of the SQL_ID and PLAN_HASH_VALUE columns of V$SQL and use that as input to DBMS_SPM—for example:


DECLARE
plan1 PLS_INTEGER;
BEGIN
plan1 := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'dv73f7y69ny8z',
plan_hash_value => 3956160932);
END;
/





The “Solution” section described how to identify a single SQL statement for which you want to create a plan baseline (based on the SQL_ID) using a query in the cursor cache. There are many other methods for creating a plan baseline for a query, such as using the SQL text, schema, module, and so on. For example, next a plan baseline is loaded based on a partial SQL string:
 


DECLARE
plan1 PLS_INTEGER;
BEGIN
plan1 := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
attribute_name => 'sql_text'
,attribute_value => 'select emp_id from emp%');
END;
/



 



2.2  How to Create Plan Baselines for SQL Contained in a SQL Tuning Set from AWR

 You have the following scenario:

• You’re upgrading a database to a new version.
• You know from past experience that upgrading to newer versions of Oracle can sometimes cause SQL statements to perform poorly because the optimizer in the upgraded version of the database is choosing a less efficient (worse) execution plan than the optimizer from the prior version of the database.
• You want to ensure that a set of critical SQL statements execute with acceptable performance
after the upgrade.



To deal with this problem, use the most resource-intensive SQL queries in the AWR as candidates for the creation of plan baselines. This solution uses the technique of creating an AWR baseline. An AWR baseline is a snapshot of activity in the AWR designated by begin/end snapshot IDs. Listed next are the steps for creating and populating a SQL tuning set with high resource-consuming SQL statements found in an AWR baseline and then creating plan baselines for those queries:

1. Create an AWR baseline.
2. Create a SQL tuning set object.
3. Populate the SQL tuning set with the queries found in the AWR baseline.
4. Use the tuning set as input to DBMS_SPM to create a plan baseline for each query contained in the SQL tuning set.


Step 1: Create an AWR Baseline
The first step is to create an AWR baseline. For example, suppose you knew you had high-load queries running between two snapshots in your database. The following creates an AWR baseline using two snapshot IDs:
 

BEGIN
DBMS_WORKLOAD_REPOSITORY.create_baseline (
start_snap_id => 2150,
end_snap_id => 2155,
baseline_name => 'peak_baseline_jun14_13');
END;
/


If you're unsure of the available snapshots in your database, you can run an AWR report or select the SNAP_ID from DBA_HIST_SNAPSHOT:

SQL> select snap_id, begin_interval_time from dba_hist_snapshot order by 1;



Step 2: Create a SQL Tuning Set Object
 
Now create a SQL tuning set. This next bit of code creates a tuning set named test1:


BEGIN
dbms_sqltune.create_sqlset(
sqlset_name => 'test1'
,description => 'STS from AWR');
END;
/



Step 3: Populate the SQL Tuning Set with High-Resource Queries Found in AWR Baseline

 
Now the SQL tuning set (created in step 2) is populated with any queries found within the AWR baseline
(created in step 1):


DECLARE
base_cur dbms_sqltune.sqlset_cursor;
BEGIN
OPEN base_cur FOR
SELECT value(x)
FROM table(dbms_sqltune.select_workload_repository(
'peak_baseline_jun14_13', null, null,'elapsed_time',
null, null, null, 15)) x;
dbms_sqltune.load_sqlset(
sqlset_name => 'test1',
populate_cursor => base_cur);
END;
/



In the prior lines of code, the AWR baseline name is passed to the DBMS_SQLTUNE package. The queries within the baseline are selected by elapsed time, and the top 15 are specified. To view the queries within the SQL tuning set, query the data dictionary as follows:


SELECT sqlset_name, elapsed_time
,cpu_time, buffer_gets, disk_reads, sql_text
FROM dba_sqlset_statements
WHERE sqlset_name = 'test1';



Step 4: Use the Tuning Set As Input to DBMS_SPM to Create Plan Baselines for Each Query Contained in the SQL Tuning Set
 
Now the tuning set (created in step 2 and populated in step 3) is provided as input to the DBMS_SPM package:


DECLARE
test_plan1 PLS_INTEGER;
BEGIN
test_plan1 := dbms_spm.load_plans_from_sqlset(
sqlset_name=>'test1');
END;
/


Each query in the SQL tuning set should now have an entry in the DBA_SQL_PLAN_BASELINES view showing that it has an enabled plan baseline associated with it—for example:


SQL> select sql_handle, plan_name, sql_text from dba_sql_plan_baselines;



The technique shown in the “Solution” section is a very powerful method for creating plan baselines for the most resource-consuming queries running in your database. The key to this recipe is understanding that you can use as input (to the DBMS_SPM package) queries contained in a SQL tuning set. A SQL tuning set can be populated from high resource-consuming statements found in the AWR and memory. This allows you to easily create plan baselines for the most problematic queries.

Having plan baselines in place for resource-intensive queries helps ensure that the same execution plan is used after there are changes to your system, such as database upgrades, changes in statistics, different data sets, and so on. Keep in mind that it's possible to have more than one accepted execution plan within the plan baseline. If you have a specific plan that you want the optimizer to always use, then consider altering the plan to a FIXED state.



2.3  How to Automatically Add Plan Baselines

■Note By default, the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter is FALSE.

You want to automatically create plan baselines for every SQL query that repeatedly executes in your database.

Listed next are the steps for automatically creating plan baselines for SQL statements that execute more than once:
1. Set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter to TRUE (either at the session
or system level).
2. Execute the queries for which you want plan baselines captured two or more times.
3. Set the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES to FALSE.


 This next example illustrates the process for adding a plan baseline (for a query) using the prior steps. First, set the specified initialization parameter at the session level:
 

SQL> alter session set optimizer_capture_sql_plan_baselines=true;

Now a query is executed twice. Oracle will automatically create a plan baseline for a query that is run two or more times while the OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES parameter is set to TRUE:
 

SQL> select first_name from emp where emp_id=3000;
SQL> select first_name from emp where emp_id=3000;
 

Now set the initialization parameter back to FALSE.
SQL> alter session set optimizer_capture_sql_plan_baselines=false;
 

The query now should have an entry in the DBA_SQL_PLAN_BASELINES view showing that it has an enabled plan baseline associated with it—for example:
 

SELECT
sql_handle, plan_name, enabled, accepted,
created, optimizer_cost, sql_text
FROM dba_sql_plan_baselines;


Here is a partial listing of the output:
SQL_HANDLE           PLAN_NAME                      ENA ACC...
-------------------- ------------------------------ --- ---...
SQL_790bd425fe4a0125 SQL_PLAN_7k2yn4rz4n095d8a279cc YES YES...


 Enabling OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES allows you to automatically capture plan baselines for queries running repeatedly (more than once) in your database. The “Solution” section described how to use this feature at the session level. You can also set the parameter so that all repeating queries in the database have plan baselines generated—for example:

SQL> alter system set optimizer_capture_sql_plan_baselines=true;
 

From this point, any query in the database that runs more than once will automatically have a plan baseline
created for it. We wouldn’t recommend that you do this in a production environment unless you have first carefully tested this feature and ensured that there will be no adverse side effects (from storing a plan baseline for every query executing more than once). However, you may have a test environment where you want to purposely create a plan baseline for every SQL statement that is repeatedly run.
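To confirm the parameter's current setting (and whether it has been changed from its default), you can query V$PARAMETER:

select name, value, isdefault
from v$parameter
where name = 'optimizer_capture_sql_plan_baselines';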




2.4  How to Alter a Plan Baseline

 You've been monitoring the plan baseline of a SQL query and notice there are two execution plans with an accepted and enabled status. You've examined both of the execution plans and are confident that one plan is superior to the other based on the value of OPTIMIZER_COST in DBA_SQL_PLAN_BASELINES. You want to instruct the optimizer to always use the plan with the lowest cost.


 The optimizer will give preference to plan baselines with a FIXED state. Use the DBMS_SPM package and
ALTER_SQL_PLAN_BASELINE function to alter a current plan baseline execution plan to FIXED. Here’s an example:
 

DECLARE
pf PLS_INTEGER;
BEGIN
pf := dbms_spm.alter_sql_plan_baseline(
plan_name => 'SQL_PLAN_1wskqhvrwf8g60e23be79'
,attribute_name => 'fixed'
,attribute_value => 'YES');
END;
/




You can query the FIXED column of DBA_SQL_PLAN_BASELINES to verify that it is now fixed within the baseline. Listed next is such a query:

SQL> SELECT sql_handle, plan_name, enabled, accepted, fixed FROM dba_sql_plan_baselines;


Here is some sample output:


SQL_HANDLE           PLAN_NAME                      ENA ACC FIX
-------------------- ------------------------------ --- --- ---
SQL_457bf2f82571bd38 SQL_PLAN_4ayzkz0kr3g9s90e466fd YES YES NO
SQL_790bd425fe4a0125 SQL_PLAN_7k2yn4rz4n095d8a279cc YES YES YES



 



 2.5  How to Determine If Plan Baselines Exist

 You recently implemented a plan baseline for a query. You want to verify the configuration of a plan baseline.

 Run the following query to view details regarding any plan baselines that have been configured:

set pages 100 linesize 150
col sql_handle form a20
col plan_name form a30
col sql_text form a40
col created form a20
SELECT sql_handle, plan_name, enabled ,accepted, created, optimizer_cost, sql_text  FROM dba_sql_plan_baselines;




SQL_HANDLE           PLAN_NAME                      ENA ACC
-------------------- ------------------------------ --- ---
SQL_b98d2ae2145eec3d SQL_PLAN_bm39aw8a5xv1xae72d2f5 YES YES

CREATED              OPTIMIZER_COST SQL_TEXT
-------------------- -------------- ----------------------------
21-MAR-11 10.53.29.0              2 select last_name from custs...


In the output, there are two key columns: SQL_HANDLE and PLAN_NAME.

Each query has an associated SQL_HANDLE, which is an identifier for the query.
Each execution plan has a unique PLAN_NAME.
The PLAN_NAME is unique within DBA_SQL_PLAN_BASELINES, whereas one query (identified by SQL_HANDLE) can have multiple execution plans (identified by PLAN_NAME).
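For example, to see which statements have accumulated more than one plan in their baseline, group on SQL_HANDLE:

SELECT sql_handle, count(*) plan_count
FROM dba_sql_plan_baselines
GROUP BY sql_handle
HAVING count(*) > 1
ORDER BY plan_count DESC;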



2.6  How to Determine If a Query Is Using a Plan Baseline

You’ve been assigned to investigate performance issues with a database and application. As a preliminary task of understanding the environment, you want to determine if any SQL statements are using plan baselines.


You can directly determine if a SQL statement currently in memory has utilized a plan baseline via the following query:
 

select sql_id, child_number, sql_plan_baseline, sql_text
from v$sql
where sql_plan_baseline is not null
and sql_text like 'select first_name%';
 

Here is a snippet of the output:

SQL_ID        CHILD_NUMBER SQL_PLAN_BASELINE              SQL_TEXT
------------- ------------ ------------------------------ -------------------------
898c46878bm3f            0 SQL_PLAN_31kcfbjxvdfaxc4b18358 select first_name from ...




2.7  How to Display Plan Baseline Execution Plans

 You want to view any execution plans for a given plan baseline.

 If you’re not sure which plan baseline you want to view, then first query the DBA_SQL_PLAN_BASELINES view:
 

SQL> SELECT plan_name, sql_text FROM dba_sql_plan_baselines;
 

After you determine the plan name, then use the DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE function to display the execution plan and corresponding plan baseline details. This example reports details for a specific plan:

SELECT *
FROM TABLE(
DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(plan_name=>'SQL_PLAN_5jjxkya4jrxv5d8a279cc'));


 If you want to display all plans for a SQL statement, then use as input the SQL_HANDLE parameter—for example:
 

SELECT *
FROM TABLE(
DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(sql_handle=>'SQL_b98d2ae2145eec3d'));
 

If there are multiple plans, then each plan will be displayed in the output.




2.8  How to Manually Add a New Execution Plan to a Plan Baseline (Evolving)

■Note Starting with Oracle Database 12c, new low-cost execution plans will automatically be entered into an existing plan baseline for a query in an accepted state. See Recipe 12-17 for details on how to modify this behavior.

 You have the following scenario:
• You have an existing plan baseline for the query.
• You have recently added an index that the query can use.
• The optimizer determines a new lower-cost plan is now available for the query and adds the new plan to the plan history in an unaccepted state.
• You notice the new plan either from a recommendation by the SQL Tuning Advisor or by querying the DBA_SQL_PLAN_BASELINES view.
• You have examined the new execution plan, have run the query in a test environment, and are confident that the new plan will result in better performance.


You want to evolve the low-cost plan in the history so that it’s moved to an accepted plan in the baseline. You realize that once the plan is accepted in the baseline, the optimizer will use it (if it’s the lowest-cost plan in the baseline).



First verify that there are plans in the unaccepted state for the query in question. Here’s a quick example:
 

SELECT sql_handle, plan_name, enabled, accepted, optimizer_cost
FROM dba_sql_plan_baselines
WHERE sql_text like '%select first_name from emp where emp_id=3000%';


Here is the output indicating there are two plans, one unaccepted but with a much lower cost:


SQL_HANDLE           PLAN_NAME                      ENA ACC OPTIMIZER_COST
-------------------- ------------------------------ --- --- --------------
SQL_58c7b2f2891bf765 SQL_PLAN_5jjxkya4jrxv5c4b18358 YES NO               2
SQL_58c7b2f2891bf765 SQL_PLAN_5jjxkya4jrxv5d8a279cc YES YES             34



Use the DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function to move a plan from the history to the baseline (evolve the plan). In this example, the SQL handle (unique SQL string associated with a SQL statement) is used to evolve a plan:

SET SERVEROUT ON SIZE 1000000
SET LONG 100000
DECLARE
rpt CLOB;
BEGIN
rpt := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
sql_handle => 'SQL_58c7b2f2891bf765');
DBMS_OUTPUT.PUT_LINE(rpt);
END;
/


One key feature of SQL plan management is that when a new low-cost plan is generated by the query optimizer, if the new low-cost plan has a lower cost than the accepted plan(s) in the plan baseline, the new low-cost plan will automatically be added to the query’s plan history in an unaccepted state.
You can choose to accept this new low-cost plan, which then moves it into the plan baseline as accepted. Moving an unaccepted execution plan from the plan history to the plan baseline (ENABLED and ACCEPTED) is known as evolving the plan baseline.
 

Why would a new plan ever be generated by the optimizer? There are several factors that would cause the
optimizer to create a new execution plan that doesn’t match an existing one in the plan baseline:
• New statistics are available.
• A new SQL profile has been assigned to the query.
• An index has been added or dropped.
 

This gives you a powerful technique to manage and use new plans as they become available. We should also
point out again that starting with Oracle Database 12c, new low-cost execution plans that are added to the plan history will automatically be evolved (added to the plan baseline). See Recipe 12-17 for details on modifying this default behavior.


You can use the DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE function in the following modes:
• Specify the name of the plan to evolve.
• Provide a list of plans to evolve.
• Run it with no value, meaning that Oracle will evolve all non-accepted plans contained within the plan baseline repository.
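For example, to evolve just one plan by name (the plan name here is taken from the earlier output; substitute one from your own database):

DECLARE
rpt CLOB;
BEGIN
rpt := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
plan_name => 'SQL_PLAN_5jjxkya4jrxv5c4b18358');
DBMS_OUTPUT.PUT_LINE(rpt);
END;
/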


If Oracle determines that there is an unaccepted plan with a lower cost, the output of the prior PL/SQL block will indicate that the plan has been moved to the accepted state (evolved).

 


2.9  How to Toggle the Automatic Acceptance of New Low-Cost Execution Plans

 In Oracle Database 12c, if a plan baseline exists for a query, and a new lower cost execution plan is generated by the optimizer, the new execution plan will automatically be entered into the plan baseline in an accepted state. You want to change the behavior of Oracle so as to not automatically add new execution plans to the plan baseline.


You can disable the automatic acceptance of a plan added to the plan baseline via the following code:

BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER('SYS_AUTO_SPM_EVOLVE_TASK',
'ACCEPT_PLANS', 'false');
END;
/


New in Oracle Database 12c, execution plans will automatically be evolved for you when the daily
SYS_AUTO_SPM_EVOLVE_TASK job runs. You can verify that plans will be automatically evaluated and potentially moved to the accepted state via this query:


select parameter_name, parameter_value
from dba_advisor_parameters
where task_name = 'SYS_AUTO_SPM_EVOLVE_TASK'
and parameter_name = 'ACCEPT_PLANS';


You can re-enable the automatic acceptance of plans with this code:
BEGIN
DBMS_SPM.SET_EVOLVE_TASK_PARAMETER('SYS_AUTO_SPM_EVOLVE_TASK',
'ACCEPT_PLANS', 'true');
END;
/




2.10  How to Disable Plan Baselines

 You’re working with a test database that has many SQL statements with associated plan baselines. You want to determine what the performance difference would be without the plan baselines enabled and therefore want to temporarily disable the use of plan baselines.

 To disable the use of any SQL plan baselines within the database, set the OPTIMIZER_USE_SQL_PLAN_BASELINES initialization parameter to FALSE:

SQL> alter system set optimizer_use_sql_plan_baselines=false scope=memory;


The prior line disables the use of the plan baselines at the SYSTEM level and records the value in memory (but not in the server parameter file). To re-enable the use of plan baselines, set the value back to TRUE.
You can also set the OPTIMIZER_USE_SQL_PLAN_BASELINES at the session level. This disables the use of plan baselines for the duration of the session for the currently connected user:


SQL> alter session set optimizer_use_sql_plan_baselines=false;



The default value for OPTIMIZER_USE_SQL_PLAN_BASELINES is TRUE, which means by default, if plan baselines are available, they will be used. When enabled, the optimizer will look for a valid plan baseline execution plan for the given SQL query and choose the one with the lowest cost. This gives you a quick and easy way to disable/enable the use of plan baselines within your entire database or specific to a session. If you want to disable the use of one specific plan baseline, then alter its state to DISABLED (by setting the ENABLED attribute to a value of NO):

DECLARE
pf PLS_INTEGER;
BEGIN
pf := dbms_spm.alter_sql_plan_baseline(
plan_name => 'SQL_PLAN_4ayzkz0kr3g9s6afbe2b3'
,attribute_name => 'ENABLED'
,attribute_value => 'NO');
END;
/




2.11  How to Remove Plan Baseline Information

 You have several plan baselines that you no longer want to use and therefore want to remove them.
 

Solution

First, determine which plan baselines exist for your database:


SQL> select plan_name, sql_handle, optimizer_cost from dba_sql_plan_baselines;


 Once you have either the PLAN_NAME or the SQL_HANDLE, you can drop a single plan baseline. This removes a single execution plan from the plan baseline using the PLAN_NAME parameter:

DECLARE
plan_name1 PLS_INTEGER;
BEGIN
plan_name1 := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
plan_name => 'SQL_PLAN_bm39aw8a5xv1x519fc7bf');
END;
/


You can also drop all plans associated with a SQL statement. This example removes all plans associated with a SQL statement using the SQL_HANDLE parameter:


DECLARE
sql_handle1 PLS_INTEGER;
BEGIN
sql_handle1 := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
sql_handle => 'SQL_b98d2ae2145eec3d');
END;
/



You may occasionally want to remove SQL plan baselines for the following reasons:
• You have old plans that aren't used anymore because more efficient (evolved) plans are available for a SQL statement (see Recipe 12-14 for determining if a query is using a plan baseline).
• You have plans that were never accepted and now want to remove them.
• You have plans that were created for testing environments that are no longer needed.


As shown in the “Solution” section, you can remove a specific plan baseline via the PLAN_NAME parameter. This will remove one specific plan. If you have several plans associated with one SQL statement, you can remove all plan baselines for that SQL statement via the SQL_HANDLE parameter.
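If you only want to clear out plans that were never accepted (rather than dropping everything for a statement), one approach is to loop through the unaccepted entries and drop each one by its SQL_HANDLE and PLAN_NAME:

DECLARE
ret PLS_INTEGER;
BEGIN
FOR r1 IN (SELECT sql_handle, plan_name
           FROM dba_sql_plan_baselines
           WHERE accepted = 'NO') LOOP
ret := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
sql_handle => r1.sql_handle,
plan_name => r1.plan_name);
END LOOP;
END;
/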


If you have a database where you want to clear out all plans, then you can encapsulate the call
DBMS_SPM.DROP_SQL_PLAN_BASELINE within a PL/SQL block that drops all plans by looping through any plan found in DBA_SQL_PLAN_BASELINES:
 

SET SERVEROUT ON SIZE 1000000
DECLARE
sql_handle1 PLS_INTEGER;
CURSOR c1 IS
SELECT sql_handle
FROM dba_sql_plan_baselines;
BEGIN
FOR r1 IN c1 LOOP
sql_handle1 := DBMS_SPM.DROP_SQL_PLAN_BASELINE(sql_handle => r1.sql_handle);
DBMS_OUTPUT.PUT_LINE('PB dropped for SH: ' || r1.sql_handle);
END LOOP;
END;
/




2.12  How to Transport Plan Baselines


You have a test environment, and you want to ensure that all of the plan baselines in the test system are moved to a production database.

Follow these steps to transport plan baselines:

1. Create a table using the DBMS_SPM package and CREATE_STGTAB_BASELINE procedure.
2. Populate the table with plan baselines using the DBMS_SPM.PACK_STGTAB_BASELINE function.
3. Copy the staging table to the destination database using a database link or Data Pump.
4. Import the plan baseline information using the DBMS_SPM.UNPACK_STGTAB_BASELINE function.
 

This example first uses the DBMS_SPM package to create a table named EXP_PB:
 

BEGIN
DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'exp_pb');
END;
/


Next the EXP_PB table is populated with plan baselines created by the database user MV_MAINT:
 

DECLARE
pbs NUMBER;
BEGIN
pbs := DBMS_SPM.PACK_STGTAB_BASELINE(
table_name => 'exp_pb',
enabled => 'yes',
creator => 'MV_MAINT');
END;
/



The prior code populates the table with all plan baselines created by a user. You can also populate the table by PLAN_NAME, SQL_HANDLE, SQL_TEXT, or various other criteria. The only mandatory parameter is the name of the table to be populated. Now copy the staging table to the destination database. You can use a database link or Data Pump to accomplish this.
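For example, to pack only the plan baselines belonging to one SQL statement, pass the SQL_HANDLE instead of the creator (the handle here is from an earlier query in this post; use one from your own database):

DECLARE
pbs NUMBER;
BEGIN
pbs := DBMS_SPM.PACK_STGTAB_BASELINE(
table_name => 'exp_pb',
sql_handle => 'SQL_b98d2ae2145eec3d');
END;
/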


Lastly, on the destination database, use the DBMS_SPM.UNPACK_STGTAB_BASELINE function to take the contents of the EXP_PB table and create plan baselines:

DECLARE
pbs NUMBER;
BEGIN
pbs := DBMS_SPM.UNPACK_STGTAB_BASELINE(
table_name => 'exp_pb',
enabled => 'yes');
END;
/


You should now have all of the plan baselines transferred to your target database. You can query
DBA_SQL_PLAN_BASELINES to verify this.




It's a fairly easy process to create a table, populate it with plan baseline information, copy the table, and then import its contents into the destination database. As shown in step 2 of the “Solution” section of this recipe, the PACK_STGTAB_BASELINE function is used (see Table 12-9 for details on the parameters to this function). This function allows quite a bit of flexibility in what types of plan baselines you want exported. You can limit the plan baselines extracted to a specific user, or to only enabled or accepted plans, and so on.


Tracing

Tracing the SQL Execution:

Oracle now recommends that you use the DBMS_MONITOR package for most types of tracing.

We also show how to trace sessions by setting various Oracle events, which Oracle Support often requests.

You'll learn
  1. How to Trace a Specific SQL Statement
  2. How to Enable Tracing in Your Own Session
  3. How to Find the Trace Files
  4. How to Examine a Raw SQL Trace File
  5. How to Analyze Oracle Trace Files
  6. How to Format Trace Files with TKPROF
  7. How to Analyze TKPROF Output
  8. How to Analyze Trace Files with the Oracle Trace Analyzer
  9. How to Trace Parallel Queries
  10. How to Trace Specific Parallel Query Processes
  11. How to Trace Parallel Queries in a RAC System
  12. How to Consolidate Multiple Trace Files
  13. How to Find the Correct Session for Tracing
  14. How to Trace a SQL Session
  15. How to Trace a Session by Process ID
  16. How to Trace Multiple SQL Sessions Belonging to a Single User
  17. How to Trace an Instance or a Database
  18. How to Generate an Event 10046 Trace for a Session
  19. How to Generate an Event 10046 Trace for an Instance
  20. How to Set a Trace in a Running Session
  21. How to Enable Tracing in a Session After Login (via a Trigger)
  22. How to Trace the Oracle Optimizer's Execution Path Using Event 10053
  23. How to Generate Automatic Oracle Error Traces
  24. How to Trace a Background Process
  25. How to Enable Oracle Listener Tracing
  26. How to Set Archive Tracing for Data Guard
 Oracle provides the TKPROF utility as well as the freely downloadable profiler named Oracle Trace Analyzer.

We show how to use both of these profilers to analyze the raw trace files you generate.



Preparing Your Environment Before Starting Tracing

1. Enable timed statistics collection.
2. Specify a destination for the trace dump file.
3. Adjust the trace dump file size.


SQL> sho parameter statistics

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
statistics_level                     string      TYPICAL
timed_statistics                     boolean     TRUE



SQL> alter system set timed_statistics=true scope=both;
System altered.


You can also set this parameter at the session level with the following statement:
SQL> alter session set timed_statistics=true;


You can find the location of the trace directory with the following command:
SQL> select name, value from v$diag_info where name='Diag Trace';

NAME                          VALUE
----------------------------- ----------------------------------------
Diag Trace                    c:\app\ora\diag\rdbms\orcl1\orcl1\trace


Ex: <diagnostic_dest>/diag/rdbms/<dbname>/<instance>/trace

The default value of the max_dump_file_size parameter is unlimited. To verify:
SQL> sho parameter dump
 

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
...
max_dump_file_size                   string      unlimited


 An unlimited dump file size means that the file can grow as large as the operating system permits.
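If you do want to cap the trace file size, you can set the parameter to a specific value (with a K or M suffix), for example:

SQL> alter system set max_dump_file_size='100M' scope=both;

Traces that hit this limit are truncated, so set it generously when capturing long-running sessions.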



3. How to Finding the Trace Files (doubt still )

1. How to Trace a Specific SQL Statement

Trace a specific SQL statement in order to find out where the database is spending its time during the
execution of the statement.


To trace a SQL statement as a one-off operation:

1. Issue the alter session set events statement, as shown here, to set up the trace.
SQL> alter session set events 'sql_trace level 12';

2. Execute the SQL statements.
SQL> select count(*) from sales;


3. Set tracing off.
SQL> alter session set events 'sql_trace off';



To trace specific SQL statements by specifying the SQL ID of a statement

 1. Find the SQL ID of the SQL statement by issuing this statement:
SQL> select sql_id,sql_text from v$sql where sql_text='select sum(quantity_sold) from sales';
 

SQL_ID        SQL_TEXT
------------- ------------------------------------
fb2yu0p1kgvhr select sum(quantity_sold) from sales


2. Set tracing on for the specific SQL statement whose SQL ID you’ve retrieved.
SQL> alter session set events 'sql_trace [sql:fb2yu0p1kgvhr] level 12';

3. Execute the SQL statement.
SQL> select sum(quantity_sold) from sales;
 

SUM(QUANTITY_SOLD)
------------------
918843


4. Turn off tracing.
SQL> alter session set events 'sql_trace[sql:fb2yu0p1kgvhr] off';
Session altered. 

 

You can trace multiple SQL statements by separating the SQL IDs with the pipe (|) character
Note: Even if you execute multiple SQL statements before you turn the tracing off, the trace file will show just the information pertaining to the SQL_ID or SQL_IDs you specify.

SQL> alter session set events 'sql_trace [sql:fb2yu0p1kgvhr|4v433su9vvzsw]';
 

You can also trace a specific SQL statement running in a different session. Enable the trace at the system level:

SQL> alter system set events 'sql_trace[sql:fb2yu0p1kgvhr] level 12';


Once the other session completes executing the SQL statement, turn the trace off:
SQL> alter system set events 'sql_trace[sql:fb2yu0p1kgvhr] off';




2. How to Enable Tracing in Your Own Session


Ordinary users can use the DBMS_SESSION package to trace their own sessions:
 

SQL>execute dbms_session.session_trace_enable(waits=>true, binds=> false);



To disable tracing, the user must execute the session_trace_disable procedure


SQL> execute dbms_session.session_trace_disable();



The DBMS_MONITOR package, which Oracle recommends for all tracing, is by default executable only by a user with the DBA role.
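If an ordinary user needs the DBMS_MONITOR package, a DBA can grant execute on it; the grantee hr below is just an illustrative example:

SQL> grant execute on dbms_monitor to hr;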






3. How to Find the Trace Files


To set an identifier for your trace files, before you start generating the trace:

SQL> alter session set tracefile_identifier='MyTune1';



To view the most recent trace files the database has created

adrci> show tracefile -t
05-NOV-13 11:44:54 diag/rdbms/orcl/orcl/trace/orcl_ckpt_3047.trc
05-NOV-13 11:50:06 diag/rdbms/orcl/orcl/trace/alert_orcl.log


To find out the path to your current session’s trace file

SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
-------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_4287.trc



To find all trace files for the current instance

SQL> select value from v$diag_info where name = 'Diag Trace'



4. How to Examine a Raw SQL Trace File


Open the trace file in a text editor to inspect the tracing information. Here are portions of a raw SQL trace generated by executing the dbms_monitor.session_trace_enable procedure:

PARSING IN CURSOR #3 len=490 dep=1 uid=85 oct=3 lid=85 tim=269523043683 hv=672110367
ad='7ff18986250' sqlid='bqasjasn0z5sz'
PARSE #3:c=0,e=647,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=0,tim=269523043680
EXEC #3:c=0,e=1749,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=3969568374,tim=269523045613
WAIT #3: nam='Disk file operations I/O' ela= 15833 FileOperation=2 fileno=4 filetype=2 obj#=-1
tim=269523061555
FETCH #3:c=0,e=19196,p=0,cr=46,cu=0,mis=0,r=1,dep=1,og=1,plh=3969568374,tim=269523064866
STAT #3 id=3 cnt=12 pid=2 pos=1 obj=0 op='HASH GROUP BY (cr=46 pr=0 pw=0 time=11 us cost=4 size=5317
card=409)'
STAT #3 id=4 cnt=3424 pid=3 pos=1 obj=89079 op='TABLE ACCESS FULL DEPT (cr=16 pr=0 pw=0 time=246 us
cost=3 size=4251 card=327)'

As you can see from this excerpt of the raw trace file, you can glean useful information, such as parse misses, waits, and the execution plan of the SQL statement.

Parse: During this stage, the database converts the SQL statement into an execution plan
and checks for authorization and the existence of tables and other objects.

Execute: The database executes the SQL statement during this phase. For a SELECT
statement, the execute phase identifies the rows the database must retrieve. The database
modifies the data for DML statements such as insert, update, and delete.

Fetch: This step applies only for a SELECT statement. During this phase, the database
retrieves the selected rows.


5. How to Analyze Oracle Trace Files


There are multiple ways to interpret a SQL trace file.
• Read the raw SQL trace file in a text editor.
• Use the Oracle-provided TKPROF (Trace Kernel Profiler) utility.
• Use Oracle Trace Analyzer, a free product you can download from Oracle Support.
• Use third-party tools.



6. How to Format Trace Files with TKPROF

Invoke the TKPROF utility from the command line. Here's an example of a typical tkprof command for formatting
a trace file:


$ tkprof user_sql_001.trc user1.prf explain=hr/hr table=hr.temp_plan_table_a sys=no
sort=exeela,prsela,fchela

Input trace file: user_sql_001.trc
Output file:      user1.prf


$ tkprof
Usage: tkprof tracefile outputfile [explain= ] [table= ]
[print= ] [insert= ] [sys= ] [sort= ]


tracefile: Specifies the name of the input trace file
outputfile: Specifies the name of the formatted output file
waits: Specifies whether the output file should record a summary of the wait events; default is yes.
sort: By default, TKPROF lists the SQL statements in the trace file in the order they were executed. You can specify various options with the sort argument to control the order in which TKPROF lists the various SQL statements.

prscpu: CPU time spent parsing
prsela: Elapsed time spent parsing
execpu: CPU time spent executing
exeela: Elapsed time spent executing
fchela: Elapsed time spent fetching

print: By default TKPROF will list all traced SQL statements. By specifying a value for the print option, you can limit the number of SQL statements listed in the output file.
sys: By default TKPROF lists all SQL statements issued by the user SYS, as well as recursive statements. Specify the value no for the sys argument to make TKPROF omit these statements.
explain: Writes execution plans to the output file; TKPROF connects to the database and issues explain plan statements using the username and password you provide with this parameter.
table: By default, TKPROF uses a table named PLAN_TABLE in the schema of the user specified by the explain parameter, to store the execution plans. You can specify an alternate table with the table parameter.
width: This is an integer that determines the output line widths of some types of output, such as the explain plan information.
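As a sketch of how these arguments combine (the file names here are illustrative), the following command reports only the top five statements sorted by elapsed fetch time, omits SYS and recursive statements, and records wait summaries:

$ tkprof mytrace_001.trc myreport.prf sys=no waits=yes print=5 sort=fchela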



7. How to Analyze TKPROF Output

You have formatted a trace file with TKPROF, and you now want to analyze the TKPROF output file:

c:\>tkprof orcl1_ora_6448_mytrace1.trc ora6448.prf explain=hr/hr sys=no sort=prsela,exeela,fchela

orcl1_ora_6448_mytrace1.trc: input trace file
ora6448.prf: output file

Information you can view in the TKPROF output file:

1.Header
2.Execution Statistics
3.Row Source Operations
4.The Execution Plan
5.Wait Events



8. How to Analyze Trace Files with Oracle Trace Analyzer

The Oracle Trace Analyzer, also known as TRCANLZR or TRCA, is a SQL trace profiling tool that's an alternative to the TKPROF utility. You must download TRCA from Oracle Support. Once you download TRCA, unzip the files and install it by executing the /trca/install/tacreate.sql script.

You must log in as a user with the SYSDBA privilege to execute the tacreate.sql script, which installs TRCA.
The script asks you for information relating to the location of the trace files, the output file, and the tablespace where you want TRCA to store its data.

1. Installing TRCA is straightforward:
SQL> @tacreate.sql
Uninstalling TRCA, please wait
TADOBJ completed.
SQL>
SQL> WHENEVER SQLERROR EXIT SQL.SQLCODE;
SQL> REM If this DROP USER command fails that means a session is connected with this user.
SQL> DROP USER trcanlzr CASCADE;
SQL> WHENEVER SQLERROR CONTINUE;
SQL>
SQL> SET ECHO OFF;
TADUSR completed.
TADROP completed.
Creating TRCA$ INPUT/BDUMP/STAGE Server Directories
...
TACREATE completed. Installation completed successfully.
SQL>


2. Set up tracing.
SQL> alter session set events '10046 trace name context forever, level 12';
Session altered.

3. Execute the SQL statement you want to trace.
SQL> select ...

4. Turn off tracing.
SQL> alter session set events '10046 trace name context off';
Session altered.

5. Run the /trca/run/trcanlzr script (START trcanlzr.sql) to profile the trace you’ve just generated. You must pass the trace file name as input to this script:
c:\trace\trca\trca\run>sqlplus hr/hr
SQL> START trcanlzr.sql orcl1_ora_7460_mytrace7.trc

Parameter 1:
Trace Filename or control_file.txt (required)
Value passed to trcanlzr.sql:
TRACE_FILENAME: orcl1_ora_7460_mytrace7.trc
Analyzing orcl1_ora_7460_mytrace7.trc
... analyzing trace(s) ...
Trace Analyzer completed.
Review first trcanlzr_error.log file for possible fatal errors.
...
233387 08/14/2013 15:59 trca_e21106.html
115885 08/14/2013 15:59 trca_e21106.txt
File trca_e21106.zip has been created
TRCANLZR completed.
SQL>
c:\trace\trca\trca\run>


You can now view the profiled trace data in text or HTML format—TRCA provides both of these in the ZIP file that it creates when it completes profiling the trace file. TRCA places the ZIP file in the directory from which you run the  /trca/run/trcanlzr.sql script.


Here are the major sections of a TRCA report, and as you can see already, the report offers a richer set of
diagnostic information than that offered by TKPROF.

Summary: Provides a breakdown of elapsed time, response time broken down into CPU and non-idle wait time, and other response time-related information

Non-Recursive Time and Totals: Provides a breakdown of response time and elapsed time during the parse, execute, and fetch steps; the report also contains a table that provides total and average waits for each idle and non-idle wait event.

Top SQL: Provides detailed information about SQL statements that account for the most response time, elapsed time, and CPU time, as shown in the following extract from the report:

There are 2 SQL statements with "Response Time Accounted-for" larger than threshold of 10.0% of the "Total Response Time Accounted-for". These combined 2 SQL statements are responsible for a total of 99.3% of the "Total Response Time Accounted-for".

There are 3 SQL statements with "Elapsed Time" larger than threshold of 10.0% of the "Total Elapsed Time". These combined 3 SQL statements are responsible for a total of 75.5% of the "Total Elapsed Time".

There is only one SQL statement with "CPU Time" larger than threshold of 10.0% of the "Total CPU Time".

Individual SQL: This is a highly useful section, as it lists all SQL statements and shows their elapsed time, response time, and CPU time. It provides the hash values and SQL IDs of each statement.

SQL Self - Time, Totals, Waits, Binds and Row Source Plan: Shows parse, execute, and fetch statistics for each statement, similar to the TKPROF utility; it also shows the wait event
breakdown (average and total times) for each statement. There’s also a very nice explain
plan for each statement, which shows the time and the cost of each execution step.

Tables and Indexes: Shows the number of rows, partitioning status, the sample size, and the last time the object was analyzed; for indexes, it additionally shows the clustering factor and the number of keys.

Summary: Shows I/O related wait (such as the db file sequential read event) information including average and total waits, for tables and indexes

Hot I/O Blocks: Shows the list of blocks with the largest wait time or times waited

Non-default Initialization Parameters: Lists all non-default initialization parameters

As this brief review of TRCA shows, it's a far superior tool to TKPROF. Besides, if you happen to love TKPROF reports, TRCA includes them as well in its ZIP file. So, what are you waiting for? Download TRCA and benefit from its rich diagnostic profiling of problem SQL statements.



9. How to Trace Parallel Queries (and Consolidate the Trace Files)


You can get an event 10046 trace for a parallel query in the same way as you would for any other query. The only difference is that the 10046 event generates as many trace files as there are parallel query servers.

SQL>alter session set tracefile_identifier='MyTrace1';

SQL> alter session set events '10046 trace name context forever, level 12';

SQL> select /*+ full(sales) parallel (sales 6) */ count(quantity_sold) from sales;

COUNT(QUANTITY_SOLD)
--------------------
918843

SQL> alter session set events '10046 trace name context off';
Session altered.


You’ll now see a total of seven trace files with the trace file identifier MyTrace1 in the trace directory. Depending on what you’re looking for, you can analyze each of the trace files separately or consolidate them into one big trace file with the trcsess utility before analyzing it with TKPROF or another profiler such as the Oracle Trace Analyzer. You’ll also find several files with the suffix .trm in the trace directory—you can ignore these files, as they are for use by the database.

Once you turn off the trace, go to the trace directory and execute the following command to find all the trace files for the parallel query:

$ find . -name '*MyTrace1*'

The find command lists all the trace files for your parallel query (ignore the files ending with .trm in the trace directory). You can move the trace files to another directory and use the trcsess utility to consolidate those files, as shown here:

$ trcsess output=MyTrace1.trc clientid='px_test1' orcl1_ora_8432_mytrace1.trc orcl1_ora_8432_
mytrace2.trc



10. How to Trace Specific Parallel Query Processes

To trace one or more specific parallel query processes.

Identify the parallel query processes you want to trace with the following command.

SQL> select p.inst_id, p.server_name,
            p.status as p_status,
            p.pid as p_pid,
            p.sid as p_sid
       from gv$px_process p
      order by p.server_name;

Let’s say you decide to trace the processes p002 and p003. Issue the following alter system set events
command to trace just these two parallel processes.

SQL> alter system set events 'sql_trace {process: pname = p002 | p003}';

Once you’re done tracing, turn off the trace by issuing the following command:
SQL> alter system set events 'sql_trace {process: pname = p002 | p003} off';



11. How to Trace Parallel Queries in a RAC System

You want to trace a parallel query in a RAC environment but aren't sure in which instance the trace files are located.


Finding the trace files for the server (or thread or slave) processes is sometimes difficult in a RAC environment, because you aren’t sure on which node or node(s) the database has created the trace files. Here are the steps to follow to make it easier to find the trace files on the different nodes.


1. Set the px_trace with an alter session command, to help identify the trace files, as shown here:
SQL> alter session set tracefile_identifier='10046';
SQL> alter session set "_px_trace" = low , messaging;
SQL> alter session set events '10046 trace name context forever,level 12';

2. Execute your parallel query.  --- this is sample parallel query
SQL> alter table bigsales parallel 4;
SQL> select count(*) from bigsales;

3. Turn all tracing off.
SQL> alter session set events '10046 trace name context off';
SQL> alter session set "_px_trace" = none;

Specifying px_trace will cause the query coordinator’s trace file to include information about the slave processes that are part of the query and the instance each slave process belongs to. You can then retrieve the trace files from the instances listed in the query coordinator’s trace file.


The _px_trace (px trace) parameter is an undocumented, internal Oracle parameter that has existed since the 9.2 release. Once you run the trace commands as shown in the “Solution” section of this recipe, the trace file for the query coordinator (QC) process will show within it the name of each of the slave processes and the instances the processes have run on—for example:

Acquired 4 slaves on 1 instances avg height=4 in 1 set q serial:2049
P000 inst 1 spid 7512
P001 inst 1 spid 4088
P002 inst 1 spid 7340
P003 inst 1 spid 9256

In this case, you know that Instance 1 is where you must look to get the trace files for the slave processes P000,P001, P002, and P003. On Instance 1, in the ADR trace subdirectory, look for file names that contain the words P000 (or P001/P002/P003), to identify the correct trace files.
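Assuming the standard trace file naming convention, you can list the slave trace files on Instance 1 with the adrci utility, for example:

adrci> show tracefile %p000%

Repeat with %p001%, %p002%, and %p003% to locate the trace files of the remaining slave processes.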



12. How to Consolidate Multiple Trace Files

You have generated multiple trace files for a session in order to tune performance, and you want to consolidate those files into a single trace file.

Use the trcsess command to merge multiple trace files into a single trace file. Here’s a simple example:

c:\trace> trcsess output=combined.trc session=196.614 orcl1_ora_8432_mytrace1.trc orcl1_ora_8432_
mytrace2.trc

The trcsess command shown here combines two trace files generated for a session into a single trace file. The session parameter identifies the session with a session identifier, consisting of the session index and session serial number, which you can get from the V$SESSION view.


The trcsess utility is part of the Oracle database and helps by letting you consolidate multiple trace files during performance tuning and debugging exercises. Here’s the syntax of the trcsess command:

trcsess [output=output_file_name]
        [session=session_id]
        [clientid=client_id]
        [service=service_name]
        [action=action_name]
        [module=module_name]
        [trace_files]


In our example, we specified the name of the consolidated trace file with the output option. If you don’t specify the output option, trcsess prints the output to standard out. Once you use trcsess to combine the output of multiple trace files into one consolidated file, you can use the TKPROF utility to analyze the file, just as you’d do in the case of a single trace file.



13. How to Find the Correct Session for Tracing

You want to initiate a session trace for a user from your own session, and you need to find the correct session to trace.

You must have the SID and the serial number for the user whose session you want to trace. You can find these from the V$SESSION view, of course, once you know the user’s name. However, you must get several other details about the user’s session to identify the correct session, since the user may have multiple sessions open. Use the following query to get the user’s information:

SQL> select a.sid, a.serial#, b.spid, b.pid,
            a.username, a.osuser, a.machine
       from v$session a,
            v$process b
      where a.username is not null
        and a.paddr = b.addr;

The query provides several attributes such as USERNAME, OSUSER, and MACHINE, which help you unambiguously select the correct session.


You can’t always rely on the first set of SID and serial number you manage to find for the user whose session you want to trace. Together, the SID and serial number uniquely identify a session. However, you may find multiple SID and serial number combinations for the same user, because your database may be using common user logins. Therefore, querying the V$SESSION view for other information such as OSUSER and MACHINE besides the SID and serial number helps to identify the correct user session.

V$SESSION view columns such as COMMAND, SERVER, LOGON_TIME, PROGRAM, and LAST_CALL_ET help identify the correct session to trace. The LAST_CALL_ET column stands for Elapsed Time of the Last Call, and following is how Oracle documentation describes this column:
• If the session STATUS is currently ACTIVE, then the value represents the elapsed time (in
seconds) since the session has become active.
• If the session STATUS is currently INACTIVE, then the value represents the elapsed time (in
seconds) since the session has become inactive.
If you still can’t find the correct session, you may want to join the V$SESSION and V$SQLAREA views to identify the correct session.
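As an illustrative sketch of such a join (the username HR is an assumption), the following query shows the SQL text currently associated with each of the user's sessions, which often pinpoints the session you're after:

SQL> select s.sid, s.serial#, s.username, q.sql_text
       from v$session s, v$sqlarea q
      where s.sql_id = q.sql_id
        and s.username = 'HR';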



14. How to Trace a SQL Session


Note:SQL tracing does impose an overhead on the database—you need to be very selective in tracing sessions in a production environment, as a trace can fill up a disk or affect CPU usage adversely.

To turn on SQL tracing for a session to diagnose a performance problem.

There are multiple ways to trace a session, but the Oracle-recommended approach is to use the DBMS_MONITOR package to access the SQL tracing facility. To trace a session, first identify the session using the SQL statement shown in recipe 13.

Once you get the SID and SERIAL# from the query shown in recipe 13, invoke the session_trace_enable procedure of the DBMS_MONITOR package, as shown here:

SQL> execute dbms_monitor.session_trace_enable(session_id=>138,serial_num=>242, waits=>true,binds=>false);

In this example, we chose to trace the wait information as well, but it's optional. Once you execute this command, have the user execute the SQL statements that you're testing (in a dev or test environment). In a production environment, wait for a long enough period to make sure you've captured the execution of the SQL statements before turning the tracing off. Invoke the session_trace_disable procedure to disable the SQL tracing for the session, as shown here:

SQL> execute dbms_monitor.session_trace_disable(138,242);
PL/SQL procedure successfully completed.

Once you complete tracing the session activity, you can get the trace file for the session from the trace directory and use the TKPROF utility (or a different profiler) to get a report. To trace the current user session, use the following pair of commands:

SQL> execute dbms_monitor.session_trace_enable();
SQL> execute dbms_monitor.session_trace_disable();


Tracing an entire session is expensive in terms of resource usage, and you must do so only when you haven't already identified a poorly performing SQL statement. A session trace gathers the following types of information.

 • Physical and logical reads for each statement that's running in the session
• CPU and elapsed times
• Number of rows processed by each statement
• Misses in the library cache
• Number of commits and rollbacks
• Row operations that show the actual execution plan for each statement
• Wait events for each SQL statement

You can specify the following parameters for the session_trace_enable procedure:
session_id: Identifies the session you want to trace (SID); if you omit this, your own session will be traced.
serial_num: Serial number for the session
waits: Set it to true if you want to capture wait information (default = false).
binds: Set it to true to capture bind information (default=false).
plan_stat: Determines the frequency with which the row source statistics (execution plan and execution statistics) are dumped

All the parameters for the session_trace_enable procedure are self-evident, except the plan_stat parameter.
You can set the following values for this parameter:
never: The trace file won’t contain any information about row source operations.
first_execution (same as setting the plan_stat parameter to the value null): Row source information is written once, after the first execution of a statement.
all_executions: Execution plan and execution statistics are written for each execution of the cursor, instead of only when the cursor is closed.

Since an execution plan for a statement can change during the course of a program run, you may want to set the plan_stat parameter to the value all_executions if you want to capture all possible execution plans for a statement.
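For example, to capture every execution plan for the session being traced (the SID and serial number here are illustrative), you could enable the trace as follows:

SQL> execute dbms_monitor.session_trace_enable(session_id=>138, serial_num=>242, waits=>true, binds=>false, plan_stat=>'all_executions');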



15. How to Trace a Session by Process ID

To identify and trace a session using an operating system process ID.


Execute the alter session (or alter system) set events command to trace a session by its operating system
process ID, which is shown by the SPID column in the V$PROCESS view. The general format of this command is as follows:

alter system set events 'sql_trace {process:pid}'

Here are the steps to tracing a session by its OS PID.

1. Get the OS process ID by querying the V$PROCESS view.
SQL> select spid,pname from v$process;

2. Once you identify the SPID of the user, issue the following statement to start the trace for that session:
SQL> alter system set events 'sql_trace {process:2714}';

3. Turn off tracing the following way:
SQL> alter system set events 'sql_trace {process:2714} off';

4. You can also execute the set events command in the following manner, to trace two processes at once:
SQL> alter system set events 'sql_trace {process:2714|2936}';


SQL> alter system set events 'sql_trace {process:2714|2936} off';

When you trace two processes simultaneously, Oracle generates two separate trace files, one for each process, as shown here:

orcl1_ora_2714.trc
orcl1_ora_2936.trc

How It Works
The alter system set events command allows you to trace a process by specifying the process ID (PID), process name (PNAME), or the Oracle Process ID (ORAPID). Here’s the syntax of the command:

alter session set events 'sql_trace {process : pid = <pid>, pname = <pname>,
orapid = <orapid>} rest of event specification'

The V$PROCESS view contains information about all currently active processes. In the V$PROCESS view, the following columns help you identify the three process-related values:

PID: the Oracle process identifier
SPID: the Operating System process identifier
PNAME: name of the process

In this recipe, we showed how to generate a trace file using the OS process identifier (SPID column in the
V$PROCESS view). You can use the general syntax shown here to generate a trace using the PID or the process name.



16. How to Trace Multiple SQL Sessions Belonging to One User

To trace multiple SQL sessions that belong to a single user.

You can trace multiple sessions that belong to a user by using the client_id_trace_enable procedure from the DBMS_MONITOR package. Before you can execute the dbms_monitor.client_id_trace_enable procedure, you must set the client_identifier for the session by using the DBMS_SESSION package, as shown here:

SQL> execute dbms_session.set_identifier('MySQLTune1')

Once you set the client identifier as shown here, the client_identifier column in the V$SESSION view is
populated. You can confirm the value of the client_identifier column by executing the following statement:

SQL> select sid, serial#,username from v$session where client_identifier='MySQLTune1';

Now you can execute the dbms_monitor.client_id_trace_enable procedure:

SQL> execute dbms_monitor.client_id_trace_enable(client_id=>'SH', waits=>true, binds=>false);

You can disable the trace with the following command:

SQL> execute dbms_monitor.client_id_trace_disable(client_id=>'SH');

Setting the client_identifier column lets you enable the tracing of multiple sessions when several users may be
connecting as the same Oracle user, especially in applications that use connection pools. The client_id_trace_enable procedure collects statistics for all sessions with a specific client ID. Note that the client_id that you specify doesn't have to belong to a currently active session. By default, the waits and binds parameters are set to false, and you can enable the tracing of both waits and binds by adding those parameters when you execute the client_id_trace_enable procedure:

SQL> exec dbms_monitor.client_id_trace_enable('SH',true,true);
PL/SQL procedure successfully completed.

You can query the DBA_ENABLED_TRACES view to find the status of a trace that you executed with a client identifier. In this view, the column TRACE_TYPE shows the value CLIENT_ID and the PRIMARY_ID shows the value of the client identifier.

SQL> select trace_type, primary_id,waits,binds from dba_enabled_traces;
TRACE_TYPE    PRIMARY_ID      WAITS    BINDS
------------- --------------- -------- --------
CLIENT_ID     SH              TRUE     TRUE



17. How to Trace an Instance or a Database
You want to trace the execution of all SQL statements in the entire instance or database.

Use the dbms_monitor.database_trace_enable procedure to trace a specific instance or an entire database.

Issue the following pair of commands to start and stop tracing for an individual instance.
SQL> execute dbms_monitor.database_trace_enable(instance_name=>'instance1');

SQL> execute dbms_monitor.database_trace_disable(instance_name=>'instance1');

You can optionally specify the waits and binds attributes. The following commands enable and disable SQL
tracing at the database level:
SQL> execute dbms_monitor.database_trace_enable();

SQL> execute dbms_monitor.database_trace_disable();

You can also set the sql_trace initialization parameter to true to turn on and turn off SQL tracing, but this
parameter is deprecated. Oracle recommends that you use the dbms_monitor (or the dbms_session) package for SQL tracing.

Obviously, instance-level and database-level SQL tracing is going to impose a serious overhead and may well turn out to be another source of performance problems! It's possible for background processes to continue writing to their trace files until the trace files reach their maximum size, the directory containing the trace files exhausts all the space allocated to it, or until you bounce the database. This is because even after you disable this type of tracing, background processes may keep writing to the trace files.

You normally don't ever have to do this; use session-level tracing instead to identify performance problems. If you must trace an entire instance because you don't know from which session a query may be executed, turn off tracing as soon as possible to reduce the overhead.



18. How to Generate an Event 10046 Trace for a Session

To get an Oracle event 10046 trace for a session.

You can get an Oracle event 10046 trace, also called an extended trace, by following these steps:
 

1. Set your trace file identifier.
SQL> alter session set tracefile_identifier='My_Trace';
 

2. Issue the following statement to start the trace.
SQL> alter session set events '10046 trace name context forever, level 12';
 

3. Execute the SQL statement(s) that you want to trace.
SQL> select sum(amount_sold) from sales;
 

4. Turn tracing off with the following command:
SQL> alter session set events '10046 trace name context off';
 

You’ll find the trace dump file in the trace directory that’s specified by the diagnostic_dest parameter
($DIAG_HOME/rdbms/db/inst/trace). You can analyze this trace file with TKPROF or another utility such as the Oracle Trace Analyzer.



 Here’s what the various keywords in the syntax for setting a 10046 trace mean:
set events: Sets a specific Oracle event, in this case, the event 10046
10046: Identifies the event for which the action should be taken
trace: The database must take this action when the event (10046) occurs.
name: Indicates the type of dump or trace
context: Specifies that Oracle should generate a context-specific trace; if you replace context with errorstack, the database will not trace the SQL statement. It dumps the error stack when it hits the 10046 event.


 forever: Specifying the keyword forever tells the database to invoke the action (trace) every time the event (10046) is invoked, until you disable the 10046 trace. If you omit the keyword forever, the action is invoked just once, following which the event is automatically disabled.

level 12: Specifies the trace level—in this case, it captures both bind and wait information.




Level  Description
-----  -----------
0      The debugging event is disabled.
1      The debugging event is enabled. For each processed database call, the following information is given: SQL statement, response time, service time, number of processed rows, number of logical reads, number of physical reads and writes, execution plan, and little additional information.
       Up to 10.2, an execution plan is written to the trace file only when the cursor associated with it is closed; the execution statistics are values aggregated over all executions.
       As of 11.1, an execution plan is written to the trace file only after the first execution of every cursor; the execution statistics are those of the first execution only.
4      As in level 1, with additional information about bind variables: mainly the data type, its precision, and the value used for each execution.
8      As in level 1, plus detailed information about wait time. For each wait experienced during processing, the following information is given: the name of the wait event, the duration, and a few additional parameters identifying the resource that was waited for.
16     As in level 1, plus the execution plan information is written to the trace file for each execution. Available as of 11.1 only.
32     As in level 1, but without the execution plan information. Available as of 11.1 only.
64     As in level 1, plus the execution plan information might be written for executions following the first one, provided that, since the last write of execution plan information, the cursor has consumed at least one additional minute of DB time. This level is interesting in two cases: first, when the information about the first execution is not enough for analyzing a specific issue; second, when the overhead of writing the information about every execution (level 16) is too high. Generally available as of 11.2.0.2 only.
In addition to the levels described in the previous table, you can also combine the levels 4 and 8 with every other level greater than 1. For example:
  • Level 12 (4 + 8): simultaneously enable level 4 and level 8.
  • Level 28 (4 + 8 + 16): simultaneously enable level 4, level 8 and level 16.
  • Level 68 (4 + 64): simultaneously enable level 4 and level 64.
If you are using dbms_monitor or dbms_session for enabling extended SQL trace, here is the mapping between the levels and the parameters:
  • Level 4: waits=FALSE, binds=TRUE, plan_stat='first_execution'
  • Level 8: waits=TRUE, binds=FALSE, plan_stat='first_execution'
  • Level 16: waits=FALSE, binds=FALSE, plan_stat='all_executions'
  • Level 32: waits=FALSE, binds=FALSE, plan_stat='never'
  • Level 64: not available yet
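For example, assuming a target session with SID 123 and serial number 456 (hypothetical values), the dbms_monitor equivalent of a level 12 trace (4 + 8) would be:

SQL> exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

You would later turn it off with dbms_monitor.session_trace_disable(123, 456).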


19. How to Generate an Event 10046 Trace for an Instance

You want to trace a problem SQL query, but you can’t identify the session in advance. You would like to trace all SQL statements executed by the instance.

You can turn on tracing at the instance level with the following alter system command, after connecting to the instance you want to trace.
 

SQL> alter system set events '10046 trace name context forever,level 12';
 

The previous command enables the tracing of all sessions that start after you issue the command—it won’t trace sessions that are already connected.
 

You disable the trace by issuing the following command:
SQL> alter system set events '10046 trace name context off';
 

This command disables tracing for all sessions.

Instance-wide tracing helps in cases where you know a problem query is running, but there’s no way to identify the session ahead of time. Make sure that you enable instance-wide tracing only when you have no other alternative, and turn it off as soon as you capture the necessary diagnostic information. Any instance-wide tracing is going to not only generate very large trace files in a busy environment but also contribute significantly to the system workload.

As in the case of recipe 18 above, it's possible for background processes to continue writing to their trace files until the trace files reach their maximum size, the directory containing the trace files exhausts all the space allocated to it, or until you bounce the database, because even after you disable this type of tracing, background processes may keep writing to the trace files.
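If you are unsure where the trace files are being written, one way (as of 11.1) is to query V$DIAG_INFO:

SQL> select name, value from v$diag_info
     where name in ('Diag Trace', 'Default Trace File');

The Diag Trace row shows the trace directory, and the Default Trace File row shows the file your own session would write to.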










20. How to Set a Trace in a Running Session


You want to set a trace in a session, but the session has already started.

Note: A user who phones to ask for help with a long-running query is a good example of a case in which you might want to initiate a trace in a currently executing session. Some business-intelligence queries, for example, run for dozens of minutes, even hours, so there is time to initiate a trace mid-query and diagnose a performance problem.

You can set a trace in a running session using the operating system process ID (SPID), with the help of the oradebug utility. Once you identify the SPID of the session you want to trace, issue the following commands to trace the session.

SQL> connect / as sysdba
SQL> oradebug setospid <SPID>
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever,level 12
SQL> oradebug event 10046 trace name context off
 

In the example shown here, we specified Level 12, but as with the 10046 trace you set with the alter session
command, you can specify the lower tracing levels 4 or 8.
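Once the trace is active, you can also ask oradebug for the exact name of the trace file being written for the attached process:

SQL> oradebug tracefile_name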


The oradebug utility comes in handy when you can’t access the session you want to trace, or when the session has already started before you can set tracing. oradebug lets you attach to the session and start the SQL tracing. If you aren’t sure about the operating system PID (or SPID) associated with an Oracle session, you can find it with the following query.
 

SQL> select p.PID, p.SPID, s.SID
       from v$process p, v$session s
      where s.paddr = p.addr
        and s.sid = &SESSION_ID;
 

oradebug is only a facility that allows you to set tracing—it’s not a tracing procedure by itself. The results of the 10046 trace you obtain with oradebug are identical to those you obtain with a normal event 10046 trace command.
 

In the example shown in the “Solution” section, we used the operating system process ID (SPID) of the Oracle session. You can instead specify the Oracle process identifier (PID) to trace a session:

SQL> connect / as sysdba
SQL> oradebug setorapid 9834
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever,level 12



In an Oracle RAC environment, as is the case with all other types of Oracle tracing, make sure you connect to the correct instance before starting the trace. As an alternative to using oradebug, you can use the dbms_system.set_sql_trace_in_session procedure to set a trace in a running session. Note that DBMS_SYSTEM is an older package, and the recommended way to trace sessions starting with the Oracle Database 10g release is to use the DBMS_MONITOR package.
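As a sketch of the older dbms_system approach mentioned above (the SID 123 and serial# 456 values are hypothetical), the call looks like this:

SQL> exec dbms_system.set_sql_trace_in_session(123, 456, TRUE);

Passing FALSE as the third argument turns the trace back off. Note that this call enables only a basic SQL trace, without bind or wait details, which is another reason to prefer dbms_monitor.session_trace_enable.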






21. How to Enable Tracing in a Session After Login (Using a Logon Trigger)

You want to trace a user’s session, but that session starts executing queries immediately after it logs in.

If a session immediately begins executing a query after it logs in, it doesn’t give you enough time to get the session information and start tracing the session. In cases like this, you can create a logon trigger that automatically starts tracing the session once the session starts. Here is one way to create a logon trigger to set up a trace for sessions created by a specific user:

SQL> create or replace trigger trace_my_user
       after logon on database
     begin
       if user = 'SH' then
         dbms_monitor.session_trace_enable(null, null, true, true);
       end if;
     end;
SQL> /
Trigger created.


Often, you find it hard to trace session activity because the session starts executing statements before you can set up the trace. This is especially so in a RAC environment, where it is harder for the DBA to identify the instance and quickly set up tracing for a running session. A logon trigger is the perfect solution for such cases. Note that in a RAC environment, the database generates the trace files in the trace directory of the instance to which the user connected.

A logon trigger is useful for tracing SQL statements issued by a specific user, because it sets the trace as soon as the user logs in. From that point on, the database traces all SQL statements issued by that user. Make sure you disable the tracing and drop the logon trigger once you have captured the SQL statements you are interested in, and remember to revoke the alter session privilege from the user as well.
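When you are done, the cleanup might look like the following (the trigger name matches the example above; the SID and serial# values are hypothetical, for a session that is still being traced):

SQL> drop trigger trace_my_user;
SQL> exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);

Dropping the trigger stops new sessions from being traced, but sessions already tracing continue until they disconnect or you disable the trace explicitly.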



22. How to Trace the Optimizer's Execution Path Using Event 10053

You want to trace the cost-based optimizer (CBO) to examine the execution path for a SQL statement.


You can trace the optimizer’s execution path by setting the Oracle event 10053. Here are the steps.
 

1. Set the trace identifier for the trace file.
SQL> alter session set tracefile_identifier='10053_trace1';
Session altered.

2. Issue the alter session set events statement to start the trace.
SQL> alter session set events '10053 trace name context forever,level 1';
Session altered.

3. Execute the SQL statement whose execution path you want to trace.
SQL> select * from users
2 where user_id=88 and
3 account_status='OPEN'
4 and username='SH';
...

4. Turn the tracing off.
SQL> alter session set events '10053 trace name context off';
Session altered.

You can examine the raw trace file directly to learn how the optimizer went about its business in selecting
the execution plan for the SQL statement.



An event 10053 trace gives you insight into the way the optimizer does its job in selecting what it estimates to be the optimal execution plan for a SQL statement. For example, you may wonder why the optimizer didn’t use an index in a specific case—the event 10053 trace shows you the logic used by the optimizer in skipping that index. The optimizer considers the available statistics for all objects in the query and evaluates various join orders and access paths. The event 10053 trace also reveals all the evaluations performed by the optimizer and how it arrived at the best join order and the best access path to use in executing a query.
You can set either Level 1 or Level 2 for the event 10053 trace. Level 2 captures the following types of information:


• Column statistics
• Single access paths
• Table joins considered by the optimizer
• Join costs
• Join methods considered by the optimizer
 

A Level 1 trace includes all the foregoing, plus a listing of all the default initialization parameters used by the
optimizer. You’ll also find detailed index statistics used by the optimizer in determining the best execution plan. The trace file captures the amazing array of statistics considered by the cost optimizer and explains how the CBO creates the execution plan. Here are some of the important things you’ll find in the CBO trace file.


• List of all internal optimizer-related initialization parameters
• Peeked values of the binds in the SQL statement
• Final query after optimizer transformations
• System statistics (CPUSPEEDNW, IOTFRSPEED, IOSEEKTIM, MBRC)
• Access path analysis for all objects in the query
• Join order evaluation


Unlike a raw 10046 event trace file, a 10053 event trace file is quite easy (and interesting) to read. You must
understand here that a 10053 trace will be generated only when a hard parse is required. Bear in mind that the trace files might at times be hard to follow when several tables with multiple indexes are analyzed during optimization.  Also, the contents of a 10053 event trace file are subject to change from one Oracle Database release to the next.
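As of 11g, you can also scope the optimizer trace to a single statement by SQL_ID, which avoids tracing unrelated hard parses (the SQL_ID shown is just an example):

SQL> alter session set events 'trace[rdbms.SQL_Optimizer.*][sql:f6rs5tka838kp]';

A 10053-style trace is then written only when that particular statement is hard-parsed.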


Here are key excerpts from our trace file. The trace file shows the cost-based query transformations applied by the optimizer:
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
OBYE: OBYE performed.


In this case, the optimizer eliminated the order by clause in our SQL statement. After performing all its
transformations, the optimizer arrives at the “final query after transformations,” which is shown here:
 

select channel_id,count(*)
from sh.sales
group by channel_id


Next, the output file shows the access path analysis for each of the tables in your query.



Access path analysis for SALES
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for SALES[SALES]
Table: SALES Alias: SALES
Card: Original: 918843.000000 Rounded: 918843 Computed: 918843.00 Non Adjusted: 918843.00
Access Path: TableScan
Cost: 495.47 Resp: 495.47 Degree: 0
Cost_io: 481.00 Cost_cpu: 205554857
Resp_io: 481.00 Resp_cpu: 205554857
Access Path: index (index (FFS))
Index: SALES_CHANNEL_BIX
resc_io: 42.30 resc_cpu: 312277
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 42.32 Resp: 42.32 Degree: 1
Cost_io: 42.30 Cost_cpu: 312277
Resp_io: 42.30 Resp_cpu: 312277
****** trying bitmap/domain indexes ******
Access Path: index (FullScan)
Index: SALES_CHANNEL_BIX
resc_io: 75.00 resc_cpu: 552508
ix_sel: 1.000000 ix_sel_with_filters: 1.000000
Cost: 75.04 Resp: 75.04 Degree: 0
Access Path: index (FullScan)
Index: SALES_CHANNEL_BIX
resc_io: 75.00 resc_cpu: 552508
ix_sel: 1.000000 ix_sel_with_filters: 1.000000
Cost: 75.04 Resp: 75.04 Degree: 0
Bitmap nodes:
Used SALES_CHANNEL_BIX
Cost = 75.038890, sel = 1.000000
Access path: Bitmap index - accepted
Cost: 75.038890 Cost_io: 75.000000 Cost_cpu: 552508.000000 Sel: 1.000000
Believed to be index-only

******** Begin index join costing ********
******** End index join costing ********
Best:: AccessPath: IndexFFS
Index: SALES_CHANNEL_BIX
Cost: 42.32 Degree: 1 Resp: 42.32 Card: 918843.00 Bytes: 0

In this case, the optimizer evaluates various access paths and shows the optimal access path as an Index Fast Full Scan (IndexFFS).

The optimizer then considers various permutations of join orders and estimates the cost for each join order it considers:
Considering cardinality-based initial join order.
Join order[1]: SALES[SALES]#0
GROUP BY sort
GROUP BY adjustment factor: 1.000000
Total IO sort cost: 0 Total CPU sort cost: 834280255
Best so far: Table#: 0 cost: 101.0459 card: 918843.0000 bytes: 2756529
Number of join permutations tried: 1
GROUP BY adjustment factor: 1.000000
GROUP BY cardinality: 4.000000, TABLE cardinality: 918843.000000
Total IO sort cost: 0 Total CPU sort cost: 834280255
Best join order: 1
Cost: 101.0459 Degree: 1 Card: 918843.0000 Bytes: 2756529




As our brief review of the 10053 trace output shows, you can get answers to puzzling questions such as
why exactly the optimizer chose a certain join order or an access path, and why it ignored an index. The answers are
all there!







23. How to Generate Automatic Oracle Error Traces


You want to create an automatic error dump file when a specific Oracle error occurs.


You can create error dumps to diagnose various problems in the database by specifying the error number in a hanganalyze or systemstate command. For example, diagnosing the causes for deadlocks is often tricky. You can ask the database to dump a trace file when it hits the ORA-00060: Deadlock detected error. To do this, specify the event number 60 with the hanganalyze or the systemstate command:
 

SQL> alter session set events '60 trace name hanganalyze level 4';
Session altered.
 

SQL> alter session set events '60 trace name systemstate level 266';
Session altered.

Both of these commands will trigger the automatic dumping of diagnostic data when the database next
encounters the ORA-00060 error. You can use the same technique in an Oracle RAC database. For example, you can issue the following command to generate automatic hanganalyze dumps:
 

SQL>alter session set events '60 trace name hanganalyze_global level 4';
 

This alter session statement invokes the hanganalyze command in any instance in which the database
encounters the ORA-00060 error.
Although we showed how to set the error trace event at the session level, it is often better, for hard-to-catch conditions, to set this trace event at the system level, because it isn't always easy to reproduce a problem in one's own session.


Setting event numbers for an error will ensure that when the specified error occurs the next time, Oracle automatically dumps the error information for you. This comes in very handy when you’re diagnosing an error that occurs occasionally and getting a current systemstate dump or a hanganalyze dump is unhelpful. Some events such as deadlocks have a text alias, in which case you can specify the alias instead of the error number. For the ORA-00060 error, the text alias is deadlock, and so you can issue the following command for tracing the error:
 

SQL> alter session set events 'deadlock trace name systemstate level 266';
Session altered.




24. How to Trace a Background Process


25. How to Enable Oracle Listener Tracing


26. How to Set Archive Tracing for Data Guard