11.2.0.1.4 to 11.2.0.2
PSU patch --
Patch Set Updates (PSUs) are cumulative patches that include both security fixes and priority fixes. The key with PSUs is that they are minor version upgrades (e.g., 11.2.0.1.2 to 11.2.0.1.4). Once a PSU is applied, only PSUs can be applied in future quarters until the database is upgraded to a new base version.
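As a quick illustration of the fifth-digit scheme, the sketch below compares two version strings with `sort -V`; the helper name `is_psu_upgrade` is hypothetical, not part of any Oracle tooling.

```shell
# Hypothetical helper: returns success (0) if $2 is the same PSU level
# as $1 or a later one. sort -V orders version strings field by field,
# so the last line of the sorted pair is the higher version.
is_psu_upgrade() {
  local from="$1" to="$2"
  [ "$(printf '%s\n%s\n' "$from" "$to" | sort -V | tail -n1)" = "$to" ]
}

is_psu_upgrade 11.2.0.1.1 11.2.0.1.2 && echo "PSU increment"
is_psu_upgrade 11.2.0.1.4 11.2.0.1.2 || echo "not an upgrade"
```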
Why: before upgrading Oracle Grid Infrastructure from 11.2.0.1 to 11.2.0.2, apply PSU2 (9655006).
Preliminary Steps
1) Enable blackout on the hosts.
2) Clean the CRON on all nodes:
crontab -l > /net/dba/software/11.2.0.2/logs/CRON_`hostname -a`.log
crontab < /dev/null
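The clean-out above can be wrapped with a guard so the crontab is only cleared after the backup file is confirmed non-empty. This is a minimal sketch: `crontab` is stubbed with a shell function so it can be dry-run without touching a real crontab.

```shell
# Sketch of the backup-then-clear step with a safety check: the crontab
# is only cleared once the backup file is confirmed non-empty.
# "crontab" is stubbed with a shell function here so the sketch can be
# dry-run without touching a real crontab.
crontab() {
  if [ "${1:-}" = "-l" ]; then
    echo "0 * * * * /dba/scripts/check_alert.sh"   # pretend existing entry
  else
    cat > /dev/null                                # pretend to install stdin
  fi
}

LOGDIR=$(mktemp -d)
BACKUP="$LOGDIR/CRON_$(hostname).log"

crontab -l > "$BACKUP"
if [ -s "$BACKUP" ]; then
  crontab < /dev/null
  echo "crontab backed up to $BACKUP and cleared"
else
  echo "backup empty, crontab left untouched" >&2
fi
```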
3) Back up the RAC services information:
srvctl status service -d gqems01d > /net/dba/software/11.2.0.2/logs/srvctlstatus.log
srvctl config service -d gqems01d > /net/dba/software/11.2.0.2/logs/srvctlconfig.log
4) Comment out BCP monitoring:
Log in to sp1-oemdb-001.ysm.corp.sp1.sample.com
vi /home/oracle/vincentj/shell/new_db_list.lst
Comment out entries as needed.
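A hypothetical sketch of the comment-out step, assuming new_db_list.lst holds one database name per line (the real file format may differ); the sample file and sed pattern are illustrative only.

```shell
# Hypothetical sketch of the comment-out step, assuming the list file
# holds one database name per line (the real format may differ).
LIST=$(mktemp)
printf '%s\n' gqems01d otherdb01 > "$LIST"

DB=gqems01d
sed -i "s/^${DB}\$/#&/" "$LIST"   # prefix the matching line with '#'

grep '^#' "$LIST"   # prints #gqems01d
```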
Upgrade OPatch Version (as root, then change ownership to oracle)
Database home:
mv /oracle/product/11.2/OPatch /oracle/product/11.2/OPatch_b411202
cp /net/dba/software/11.2.0.2/p6880880_112000_Linux-x86-64.zip /oracle/product/11.2
unzip /oracle/product/11.2/p6880880_112000_Linux-x86-64.zip
Grid home, as root:
mv /oragrid/product/11.2/OPatch /oragrid/product/11.2/OPatch_b411202
cp /net/dba/software/11.2.0.2/p6880880_112000_Linux-x86-64.zip /oragrid/product/11.2
cd /oragrid/product/11.2
unzip p6880880_112000_Linux-x86-64.zip
chown -R oracle:dba OPatch
Apply PSU2 on 11.2.0.1
Why PSU2?
Before upgrading Oracle Grid Infrastructure from 11.2.0.1 to 11.2.0.2, apply PSU2 (9655006), or patch the GI home with patches 9413827 and 9706490.
In the case of the DEVEMS DB, we are on 11.2.0.1 PSU1. The better option is to apply PSU2: if the GI home is instead patched with 9413827 and 9706490, PSU1 has to be rolled back (patch 9413827 conflicts with it), and patches 9413827 and 9706490 also conflict with each other.
Pre-checks
Verify the new OPatch version:
%source /home/oracle/.zshrc
%opatch version    (expected output: 11.2.0.1.4)
%source /home/oracle/.zshrc.asm
%opatch version    (expected output: 11.2.0.1.4)
%opatch lsinventory -detail -oh /oracle/product/11.2
%opatch lsinventory -detail -oh /oragrid/product/11.2
%cd /net/dba/software/11.2.0.2/PSU
%opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./9655006 -oh /oracle/product/11.2
%opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir ./9655006 -oh /oragrid/product/11.2
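The conflict check can be confirmed mechanically by looking for OPatch's success line in the output. In this sketch the opatch output is simulated with a sample log; the exact wording of the success line may vary by OPatch version.

```shell
# Sketch: confirm the conflict check passed before patching, by looking
# for OPatch's success line. A sample log stands in for real output;
# the exact wording can vary by OPatch version.
check_log=$(mktemp)
cat > "$check_log" <<'EOF'
Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.
EOF

if grep -q 'Prereq "checkConflictAgainstOHWithDetail" passed' "$check_log"; then
  echo "no conflicts: safe to apply 9655006"
else
  echo "conflict detected: review $check_log before patching" >&2
fi
```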
Patching the GI Home and DB Home
Verify that CRS is up and running:
cd $ORACLE_GRID/bin
./crsctl status resource -t
As the root user, run opatch auto as follows. You MUST run the command in a rolling fashion on each node of the cluster.
1. Generate an OCM response file by running "emocmrsp":
cd /home/oracle/product/11.2.0.2
When prompted for an email address, either leave it blank or provide one. The file is created in the directory from which the command was executed.
Command to create the ocm.rsp file (OCM is Oracle Configuration Manager):
#/home/oragrid/product/11.2.0.2/OPatch/ocm/bin/emocmrsp
ls -ltr /home/oracle/product/11.2.0.2/ocm.rsp
Apply the Patch
#/oragrid/product/11.2/OPatch/opatch auto /net/dba/software/11.2.0.2/PSU
When prompted for the ocm.rsp location:
OPatch is bundled with OCM, Enter the absolute OCM response file path:
Enter the path of the ocm.rsp file, e.g.:
/home/oracle/product/11.2.0.2/ocm.rsp
Sample output (informational, just watch it):
Executing /usr/bin/perl /oragrid/product/11.2/OPatch/crs/patch112.pl -patchdir /net/dba/software/11.2.0.2 -patchn PSU -paramfile /home/oragrid/product/11.2.0.2/crs/install/crsconfig_params
opatch auto log file location is /home/oragrid/product/11.2/OPatch/crs/../../cfgtoollogs/opatchauto2015-03-16_20-49-48.log
Detected Oracle Clusterware install
Using configuration parameter file: /home/oragrid/product/11.2.0.2/crs/install/crsconfig_params
OPatch is bundled with OCM, Enter the absolute OCM response file path: /home/oracle/product/11.2.0.2/ocm.rsp <-- Response file path
Respond "YES" when prompted.
After the patch is applied on both nodes, run catbundle.sql from any one node. The script determines the last bundle patch applied to the database and executes only the scripts in the patch that have changed since that bundle was applied.
cd $ORACLE_HOME/rdbms/admin
$sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @catbundle.sql psu apply
SQL> QUIT
Check the log for errors.
Patch Verification:
[oracle@gq1-devdnadb-001]% source /oracle/.zshrc
[oracle@gq1-devdnadb-001]% $ORACLE_HOME/OPatch/opatch lsinventory|grep "applied"
[oracle@gq1-devdnadb-001]% source /oracle/.zshrc.asm
[oracle@gq1-devdnadb-001]% $ORACLE_GRID/OPatch/opatch lsinventory|grep "applied"
Expected Output: All existing patches will be rolled back, hence only this patch will exist.
[oracle@gq1-emsdb-001]~% $ORACLE_HOME/OPatch/opatch lsinventory|grep "applied"
Patch 11664046 : applied on Tue Mar 15 14:33:57 PDT 2015
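A small sketch of the verification step: count the "applied" lines in lsinventory output and flag anything other than a single patch. The inventory output is simulated with a sample line here.

```shell
# Sketch: count "applied" entries in opatch lsinventory output; after
# the PSU only one patch should be listed. A sample line stands in for
# the real inventory output.
inv=$(mktemp)
cat > "$inv" <<'EOF'
Patch  9655006  : applied on Mon Mar 16 21:05:12 PDT 2015
EOF

applied=$(grep -c "applied" "$inv")
if [ "$applied" -eq 1 ]; then
  echo "inventory clean: single PSU applied"
else
  echo "unexpected patch count: $applied" >&2
fi
```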
GRID UPGRADE: Set Up the New Directory Location
Create the necessary directories; the directory structure must be present to avoid errors.
mkdir -p /oragrid/product/11.2.0.2
mkdir -p /oracle/product/11.2.0.2
chown oracle:dba /oragrid/product/11.2.0.2
chown oracle:dba /oracle/product/11.2.0.2
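The four commands above can be expressed as a loop. The sketch below uses a scratch prefix so it can be dry-run; on the real hosts the prefix is / and the chown must run as root.

```shell
# The directory setup as a loop. A scratch prefix is used so this can
# be dry-run; on the real hosts the prefix is / and chown runs as root.
PREFIX=$(mktemp -d)
for home in oragrid/product/11.2.0.2 oracle/product/11.2.0.2; do
  mkdir -p "$PREFIX/$home"
  # chown oracle:dba "$PREFIX/$home"   # requires root; skipped in dry-run
done
ls -d "$PREFIX"/oragrid/product/11.2.0.2 "$PREFIX"/oracle/product/11.2.0.2
```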
Run:
./runInstaller -silent -waitforcompletion -responseFile /net/dba/software/11.2.0.2/response_files/11202_grid.rsp -force -ignoreSysPrereqs
What is a fixup script?
During an installation, OUI detects when the minimum requirements for an installation are not met and creates shell scripts, called fixup scripts, to finish incomplete system configuration steps. If OUI detects an incomplete task, it generates fixup scripts (runfixup.sh). You must run these scripts as root.
Run runfixup.sh to fix prerequisites. For the 11.2.0.1 to 11.2.0.2 upgrade, the ideal output is "Nothing to Fix!!".
Note: before running rootupgrade.sh, run runfixup.sh as the root user on each node; the expected output is "Nothing to Fix!!".
Node 2:
[root@gq1-devdnadb-002 tmp]# cd /tmp/CVU_11.2.0.1.0_oracle
[root@gq1-devdnadb-002 tmp]# ./runfixup.sh
Upgrade:
Run rootupgrade.sh on one node at a time, starting with node 1.
Node 1:
script /net/dba/software/11.2.0.2/logs/rootupgrade_`hostname -a`.log
time /oragrid/product/11.2.0.2/rootupgrade.sh
Check the upgrade log; sample path format:
/oragrid/product/11.2.0.2/install/root_gq1-devemsdb-001.data.gq1.sample.com_2015-11-16_15-40-26.log
Run cluvfy -- used to verify and troubleshoot RAC CRS errors post-installation:
cd /net/dba/software/11.2.0.2/grid
./runcluvfy.sh stage -pre crsinst -upgrade -n node1,node2,... -rolling -src_crshome /oragrid/product/11.2 -dest_crshome /oragrid/product/11.2.0.2 -dest_version 11.2.0.2.0 -fixup -fixupdir /tmp -verbose > /net/dba/software/11.2.0.2/logs/cluvfy_post.log
cluvfy generates the fixup scripts; run fixup.sh on each node as the root user.
Node 1:
[root@gq1-devdnadb-001 tmp]# cd /tmp/CVU_11.2.0.2.0_oracle
[root@gq1-devdnadb-001 tmp]# cp fixup/gq1-devdnadb-001/fixup.* . -- copy to all nodes
[root@gq1-devdnadb-001 tmp]# ./runfixup.sh
Copy the fixup files to the other nodes and run them one by one as the root user on each node:
[oracle@gq1-devdnadb-001]/tmp% scp fixup.* gq1-devdnadb-002:/tmp/CVU_11.2.0.1.0_oracle/
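For clusters with more than two nodes, the copy can be looped over a node list. In this sketch scp is only echoed (a dry run); replace the echo with the real scp on the hosts.

```shell
# Dry-run sketch: generate the scp command for every remaining node.
# Replace the echo with the real scp on the hosts; the node list here
# is just the second node from the example above.
NODES="gq1-devdnadb-002"
for node in $NODES; do
  echo scp 'fixup.*' "$node:/tmp/CVU_11.2.0.1.0_oracle/"
done
```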
Install the 11.2.0.2 Database Binaries:
cd /net/dba/software/11.2.0.2/database
./runInstaller -silent -waitforcompletion -responseFile /net/dba/software/11.2.0.2/response_files/11202_db.rsp -force
On each node, as root:
/oracle/product/11.2.0.2/root.sh
DB upgrade:
SQL> spool /net/dba/software/11.2.0.2/logs/utlu112i.log
SQL> @?/rdbms/admin/utlu112i.sql <-- pre-upgrade information tool: reports readiness and estimated upgrade time
SQL> spool /net/dba/software/11.2.0.2/logs/utlrp_precatupgrd.log
SQL> @?/rdbms/admin/utlrp.sql <-- recompile invalid objects and record the invalid-object count
cd /net/dba/software/11.2.0.2/response_files
SQL> alter session set nls_language='American';
SQL> @dbupgdiag.sql /net/dba/software/11.2.0.2/logs/ <-- diagnoses the status of the database before (or after) the upgrade
Copy important files from the old home to the new home:
1. init/spfile :
mv /oracle/product/11.2/dbs/initgqems01d1.ora /oracle/product/11.2.0.2/dbs/initgqems01d1.ora
2. password file: (On every node) -- unlink old password file and link new password file
cd /oracle/product/11.2/dbs
unlink orapwgqems01d1
cd /oracle/product/11.2.0.2/dbs
ln -s /u05/ora_dbs/orapwgqems01d orapwgqems01d1
3. tnsnames.ora
mv /oracle/product/11.2/network/admin/tnsnames.ora /oracle/product/11.2.0.2/network/admin/tnsnames.ora
Parameter change if upgrading from 11.2.0.1:
Set CLUSTER_DATABASE=FALSE.
Case 1: If using SPfile
alter system set CLUSTER_DATABASE=FALSE scope=spfile sid='*';
Case 2: If using a pfile, edit the individual pfiles or the ifile, depending on the configuration:
srvctl stop database -d gqems01d
vi /u05/ora_dbs/initgqems01d.ora
Modify the Parameter to False
If Upgrading from 10.2.0.4:
Modify the pfile :
*.cluster_database=false
*.compatible='11.2.0'
*.diagnostic_dest=/oracle
*.sec_case_sensitive_logon=FALSE
Comment out the following, as they are deprecated in 11g:
#*.audit_file_dest
#*.background_dump_dest
#*.core_dump_dest
#*.user_dump_dest
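The comment-out edits can be scripted with sed. A sketch, using a sample pfile in place of /u05/ora_dbs/initgqems01d.ora:

```shell
# Sketch: comment out the deprecated dump-destination parameters with
# sed. A sample pfile stands in for /u05/ora_dbs/initgqems01d.ora.
pfile=$(mktemp)
cat > "$pfile" <<'EOF'
*.cluster_database=true
*.audit_file_dest='/oracle/admin/adump'
*.background_dump_dest='/oracle/admin/bdump'
*.core_dump_dest='/oracle/admin/cdump'
*.user_dump_dest='/oracle/admin/udump'
EOF

for p in audit_file_dest background_dump_dest core_dump_dest user_dump_dest; do
  sed -i "s/^\*\.${p}/#&/" "$pfile"
done
grep -c '^#' "$pfile"   # prints 4: all four parameters commented out
```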
4. Database upgrade from 11.2.0.1 to 11.2.0.2. The upgrade runs against a single-instance database: start the database in upgrade mode and run catupgrd.sql on one node only, then run utlrp.sql again to recompile invalid objects.
source /oracle/.zshrc11202
sqlplus "/as sysdba"
SQL> startup upgrade pfile='/oracle/product/11.2.0.2/dbs/initgqems01d1.ora'
SQL> set time on timing on echo on
SQL> spool /net/dba/software/11.2.0.2/logs/dbupgrade.log
SQL> @?/rdbms/admin/catupgrd.sql  <-- obsolete in 12c, replaced by catctl.pl
SQL> spool off
SQL> exit
SQL> spool /net/dba/software/11.2.0.2/logs/utlrp_postcatupgrd.log
SQL> @?/rdbms/admin/utlrp.sql <--Recompiling invalids
Post Upgrade Tasks
-----------------------------------------------
Start the database in normal mode.
SQL> startup
Execute utlu112s.sql -- list the status of upgraded components.
SQL> @?/rdbms/admin/utlu112s.sql
Execute catuppst.sql -- performs upgrade actions that do not require the database to be in upgrade mode.
SQL> @?/rdbms/admin/catuppst.sql
Execute utlrp.sql -- recompile invalid objects.
SQL> @?/rdbms/admin/utlrp.sql
Execute utluiobj.sql -- list objects still invalid after the upgrade.
SQL> @?/rdbms/admin/utluiobj.sql
Start the database in upgrade mode to update the time zone version to 18, as recommended in preupgrade_fixups.sql.
SQL> STARTUP UPGRADE
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         14
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> exec DBMS_DST.BEGIN_PREPARE(18)
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         14
DST_SECONDARY_TT_VERSION       18
DST_UPGRADE_STATE              PREPARE
SQL> TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
SQL> TRUNCATE TABLE sys.dst$affected_tables;
SQL> TRUNCATE TABLE sys.dst$error_table;
SQL> set serveroutput on
SQL> BEGIN
       DBMS_DST.FIND_AFFECTED_TABLES(
         affected_tables  => 'sys.dst$affected_tables',
         log_errors       => TRUE,
         log_errors_table => 'sys.dst$error_table');
     END;
     /
SQL> SELECT * FROM sys.dst$affected_tables;
SQL> SELECT * FROM sys.dst$error_table;
SQL> EXEC DBMS_DST.END_PREPARE;
-------------------------
Upgrade Timezone version
-------------------------
SQL> purge dba_recyclebin;
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> EXEC DBMS_DST.BEGIN_UPGRADE(18);
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         18
DST_SECONDARY_TT_VERSION       14
DST_UPGRADE_STATE              UPGRADE
SQL> SELECT OWNER, TABLE_NAME, UPGRADE_IN_PROGRESS FROM ALL_TSTZ_TABLES WHERE UPGRADE_IN_PROGRESS='YES';

11 rows selected.  -- the number may differ
SQL> shutdown immediate
SQL> startup
SQL> alter session set "_with_subquery"=materialize;
SQL> alter session set "_simple_view_merging"=TRUE;
SQL> set serveroutput on
SQL> VAR numfail number
BEGIN
  DBMS_DST.UPGRADE_DATABASE(:numfail,
    parallel                  => TRUE,
    log_errors                => TRUE,
    log_errors_table          => 'SYS.DST$ERROR_TABLE',
    log_triggers_table        => 'SYS.DST$TRIGGER_TABLE',
    error_on_overlap_time     => FALSE,
    error_on_nonexisting_time => FALSE);
  DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/
Table list: "IX"."AQ$_STREAMS_QUEUE_TABLE_S"
Number of failures: 0
Table list: "IX"."AQ$_STREAMS_QUEUE_TABLE_L"
Number of failures: 0
Table list: "IX"."AQ$_ORDERS_QUEUETABLE_S"
Number of failures: 0
Table list: "IX"."AQ$_ORDERS_QUEUETABLE_L"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Table list: "APEX_040200"."WWV_FLOW_WORKSHEET_NOTIFY"
Number of failures: 0
Table list: "APEX_040200"."WWV_FLOW_FEEDBACK_FOLLOWUP"
Number of failures: 0
Table list: "APEX_040200"."WWV_FLOW_FEEDBACK"
Number of failures: 0
Table list: "APEX_040200"."WWV_FLOW_DEBUG_MESSAGES2"
Number of failures: 0
Table list: "APEX_040200"."WWV_FLOW_DEBUG_MESSAGES"
Number of failures: 0
Failures:0
SQL> VAR fail number
BEGIN
  DBMS_DST.END_UPGRADE(:fail);
  DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/
An upgrade window has been successfully ended.
Failures:0
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         18
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE
SQL> SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_18.dat              18          0
If the registry is not updated, update it manually:
SQL> select TZ_VERSION from registry$database;

TZ_VERSION
----------
        14
SQL> update registry$database set TZ_VERSION = (select version FROM v$timezone_file);

1 row updated.
SQL> select TZ_VERSION from registry$database;

TZ_VERSION
----------
        18
SQL> SELECT value$ FROM sys.props$ WHERE NAME = 'DST_PRIMARY_TT_VERSION';

VALUE$
--------------------------------------------------------------------------------
18
Reset the compatible parameter to 11.2.0 if it was changed earlier.
Start the database in normal mode.
Now convert the upgraded single-instance database back to a clustered database.
Remove all services running from the old home:
source /oracle/.zshrc
srvctl config service -d gqems01d
srvctl status service -d gqems01d
Remove the services and the database registered in the cluster from the old home:
srvctl remove service -d gqems01d -s test.svc -f
/oracle/product/11.2/bin/srvctl remove database -d gqems01d
Register the newly installed 11.2.0.2 home, database, instances, and services with the cluster:
source /oracle/.zshrc11202
srvctl add database -d gqems01d -o /oracle/product/11.2.0.2
srvctl add instance -d gqems01d -i gqems01d1 -n gq1-devemsdb-001
srvctl add instance -d gqems01d -i gqems01d2 -n gq1-devemsdb-002
Adding services:
srvctl add service -d gqems01d -s SNAME -r gqems01d1,gqems01d2
Shut down the database, change the cluster_database parameter back to TRUE on both nodes, and start the database using srvctl:
srvctl start database -d gqems01d
srvctl status db -d gqems01d
Check invalid objects (the count should match the earlier utlrp.sql result):
col comp_name format A40
col status format A15
col version format A15
SELECT comp_name, status, version FROM dba_registry;
col owner format A20
col object_name format A40
select owner, object_type, object_name, status from dba_objects where status!='VALID' order by owner, object_type;
Move the env files to their correct names:
mv /oracle/.zshrc11202 /oracle/.zshrc
mv /oracle/.zshrc.asm11202 /oracle/.zshrc.asm
Since this was a manual DB upgrade, check the following files on all nodes post-upgrade to confirm the new homes are reflected correctly:
1. /etc/oratab
2. /oracle/oraInventory/ContentsXML/inventory.xml
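A sketch of the oratab check, using a sample file in place of the real /etc/oratab; a similar grep can be run against inventory.xml.

```shell
# Sketch of the oratab check; a sample file stands in for /etc/oratab.
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
gqems01d1:/oracle/product/11.2.0.2:N
+ASM1:/oragrid/product/11.2.0.2:N
EOF

if grep -q ':/oracle/product/11.2.0.2:' "$oratab"; then
  echo "oratab reflects the new 11.2.0.2 home"
else
  echo "oratab still references the old home" >&2
fi
```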
Restore the CRON from the backup taken earlier:
crontab < /net/dba/software/11.2.0.2/logs/CRON_`hostname -a`.log
OEM targets (listener/DB) must be reconfigured to reflect the new home location.
Restart the agent on all nodes.
Remove Blackout.
Changes to monitoring scripts:
Any home-grown shell script that references the Oracle home must be updated. For example, the binary backup script should reflect the new home, and the old home backup should be deleted from the /net/dba/backup/`hostname -a` directory.
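A sketch of updating a home-grown script's Oracle home reference with sed; the one-line sample script and the sed pattern are illustrative only.

```shell
# Sketch: point a home-grown script at the new home with sed. The
# one-line sample script and the sed pattern are illustrative only.
script=$(mktemp)
echo 'ORACLE_HOME=/oracle/product/11.2' > "$script"

sed -i 's|/oracle/product/11\.2$|/oracle/product/11.2.0.2|' "$script"
cat "$script"   # prints ORACLE_HOME=/oracle/product/11.2.0.2
```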
Apply patch 11664046 to avoid switchover issues (rolling patch). OPatch installation steps:
cd /net/dba/software/11.2.0.2/PSU/BCP
unzip p11664046_112020_Linux-x86-64.zip
Upgrade the OPatch version again to 11.2.0.1.4, as the new binaries ship with OPatch 11.2.0.1.1.
Follow the steps above under "Upgrade OPatch Version" to upgrade to the latest OPatch.
Verify the OPatch version:
/oracle/product/11.2.0.2/OPatch/opatch version
/oragrid/product/11.2.0.2/OPatch/opatch version
Verify that the inventory is clean on all nodes:
/oragrid/product/11.2.0.2/OPatch/opatch lsinventory
Apply the patch in a rolling fashion on all nodes:
cd 11664046
%opatch apply
Verify the patch is applied:
/oracle/product/11.2.0.2/OPatch/opatch lsinventory|grep "applied"
/oragrid/product/11.2.0.2/OPatch/opatch lsinventory|grep "applied"
Verify that MRP is applying logs.