Add node
1. Run the cluster verify utility and fix all the reported issues.
cluvfy stage -pre nodeadd -n Node2 -fixup -verbose
2. If the logfiles reside in the ORACLE_HOME, it is better to purge them before the activity so that addNode.sh does not copy them to the new node; a rough cleanup sketch follows.
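A minimal cleanup sketch (the directories and the 30-day cutoff are my own assumptions, not part of the procedure; review the file list before deleting anything):
export GRID_HOME=/u01/app/11.2.0/grid
# check how much space the Grid home logs currently occupy
du -sh $GRID_HOME/log $GRID_HOME/cfgtoollogs 2>/dev/null
# list trace files older than 30 days first ...
find $GRID_HOME/log -type f -name "*.trc" -mtime +30 -ls
# ... and delete them only once the list looks safe
find $GRID_HOME/log -type f -name "*.trc" -mtime +30 -delete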
3. Run addNode.sh to copy the Grid Home.
In my env, ‘oracle’ is the owner of both the GRID and RDBMS homes, so I ran the command as the ‘oracle’ user.
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
/u01/app/11.2.0/grid/oui/bin/addNode.sh -silent CLUSTER_NEW_NODES={Node2} CLUSTER_NEW_VIRTUAL_HOSTNAMES={Node2-vip}
The above might fail some checks; if they are ignorable, set IGNORE_PREADDNODE_CHECKS=Y and run addNode.sh again, as in the sketch below.
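A minimal re-run sketch, assuming you have reviewed the failed checks and they are genuinely ignorable (same node and VIP names as above):
export IGNORE_PREADDNODE_CHECKS=Y
/u01/app/11.2.0/grid/oui/bin/addNode.sh -silent CLUSTER_NEW_NODES={Node2} CLUSTER_NEW_VIRTUAL_HOSTNAMES={Node2-vip}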
4. Run root.sh on the new node.
[root@Node2 ~]# /u01/app/11.2.0/grid/root.sh
5. Perform the post-node-add check from Node2:
[oracle@Node2]$ /u01/app/11.2.0/grid/bin/cluvfy stage -post nodeadd -n Node2
6. Perform the database pre-install check from Node2:
[oracle@Node2]$ /u01/app/11.2.0/grid/bin/cluvfy stage -pre dbinst -n Node2 -r 11gR2
7. Run addNode.sh for the RDBMS home from the 1st node.
/u01/app/oracle/product/11.2.0.3/db_1/oui/bin/addNode.sh -silent CLUSTER_NEW_NODES={Node2}
The above might fail some checks; if they are ignorable, set the variable below and run addNode.sh again.
export IGNORE_PREADDNODE_CHECKS=Y
8. Now you have both the Grid and RDBMS homes on Node2; the remaining steps configure and start the new database instance.
9. Modify the spfile and keep it in a shared location (see the sketch after the listener parameters below).
10. Point the LOCAL_LISTENER of each instance to its own node's listener:
Inst1.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=Node1-vip)(PORT=1521))))'
Inst2.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=Node2-vip)(PORT=1521))))'
*.remote_listener=test-scan
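One way to apply steps 9 and 10 from SQL*Plus; this is only a sketch, and the +DATA disk group and the database name 'test' are assumptions for illustration (the instance names come from the parameters above):
sqlplus / as sysdba <<'EOF'
-- point each instance's LOCAL_LISTENER at its own VIP listener (step 10)
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=Node1-vip)(PORT=1521))))' scope=spfile sid='Inst1';
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=Node2-vip)(PORT=1521))))' scope=spfile sid='Inst2';
alter system set remote_listener='test-scan' scope=spfile sid='*';
-- recreate the spfile on shared storage so both instances read the same file (step 9)
create pfile='/tmp/inittest.ora' from spfile;
create spfile='+DATA/test/spfiletest.ora' from pfile='/tmp/inittest.ora';
EOF
# each node's $ORACLE_HOME/dbs/init<SID>.ora then needs a single line pointing at the shared spfile,
# e.g. SPFILE='+DATA/test/spfiletest.ora'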
11. Set up the environment variables on Node2.
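For example (only a sketch; the SID and home path are assumptions based on the instance and home names used above):
export ORACLE_SID=Inst2
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1
export PATH=$ORACLE_HOME/bin:$PATH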
12. Start the instance on Node2:
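A sketch using srvctl, assuming an admin-managed database registered as 'test' (an assumption) and the new instance Inst2; if the instance is not yet registered with Clusterware, add it first:
srvctl add instance -d test -i Inst2 -n Node2
srvctl start instance -d test -i Inst2
srvctl status database -d test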
13. Copy the TNS entries (tnsnames.ora) to the new node.
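A simple way to do this, run as ‘oracle’ from Node1 (assuming the same RDBMS home path on both nodes):
scp $ORACLE_HOME/network/admin/tnsnames.ora Node2:$ORACLE_HOME/network/admin/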
Delete node from cluster (12cR1)
1. If required, move the OCR backup to a node that is not being deleted.
[grid@gract3 ~]$ ocrconfig -showbackup
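If the latest backup shown above lives on the node being deleted, one option is a manual OCR backup taken as root from a surviving node (a sketch, not part of the original listing):
[root@gract1 ~]# ocrconfig -manualbackup
[root@gract1 ~]# ocrconfig -showbackup manual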
Ensure that all the instances are registered in the default CRS Listener.
[grid@gract3 ~]$ lsnrctl status LISTENER_SCAN2
If there is a listener in the Oracle home on the RAC node that you are deleting, you must disable and stop it before deleting the Oracle RAC software:
$ srvctl disable listener -l <listener_name> -n <NodeToBeDeleted>
$ srvctl stop listener -l <listener_name> -n <NodeToBeDeleted>
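For example, assuming a hypothetical Oracle-home listener named LISTENER_OH on gract3:
$ srvctl disable listener -l LISTENER_OH -n gract3
$ srvctl stop listener -l LISTENER_OH -n gract3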
2. Update the ORACLE_HOME inventory on the node that you want to delete and make it local (local):
[oracle@gract3 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract3 -local
3. Delete the Oracle RAC software from the ORACLE_HOME (local):
[oracle@gract3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
4. Update the ORACLE_HOME node list on all remaining (non-local) nodes:
[oracle@gract1 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract1,gract2
[oracle@gract2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=gract1,gract2
[oracle@gract1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory
..
Rac system comprising of multiple nodes
Local node = gract1
Remote node = gract2
5. Verify whether the node to be deleted is active by using the following command from the $CRS_HOME/bin directory (non-local):
[grid@gract1 ~]$ olsnodes -s -t
gract1 Active Unpinned
gract2 Active Unpinned
gract3 Active Unpinned
6. Deconfigure the Oracle Clusterware applications and daemons running on the node (local).
Run the rootcrs.pl script as root from the $CRS_HOME/crs/install directory on the node to be deleted
(if it is the last node, use the -lastnode option) as follows:
[root@gract3 Desktop]# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
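For reference, the last-node variant mentioned above would look like this (only applicable if gract3 were the final node left in the cluster):
[root@gract3 Desktop]# $GRID_HOME/crs/install/rootcrs.pl -deconfig -force -lastnode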
7. Delete the node from the cluster (non-local).
location: $CRS_HOME/bin directory, run as root to delete the node from the cluster:
[root@gract1 ~]# $GRID_HOME/bin/crsctl delete node -n gract3
CRS-4661: Node gract3 successfully deleted.
8. Update the GRID_HOME node list on the node to be deleted (gract3) (local).
location: $CRS_HOME/oui/bin directory:
[grid@gract3 ~]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME CLUSTER_NODES=gract3 -local
9. Update the GRID_HOME node list on the remaining nodes (all non-local nodes).
location: $CRS_HOME/oui/bin, run from each of the remaining nodes in the cluster.
on gract1 (non-local node 1):
[grid@gract1 ~]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={gract1,gract2}" -local
[grid@gract1 ~]$ $GRID_HOME/OPatch/opatch lsinventory
..
Patch level status of Cluster nodes :
Patching Level Nodes
-------------- -----
0 gract2,gract1
on gract2 (non-local node 2):
[grid@gract2 ~]$ $GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={gract1,gract2}" -local
[grid@gract2 ~]$ $GRID_HOME/OPatch/opatch lsinventory
..
Patch level status of Cluster nodes :
Patching Level Nodes
-------------- -----
0 gract2,gract1
10. Deinstall the Oracle Clusterware home from the node that you want to delete (local node):
[grid@gract3 ~]$ $GRID_HOME/deinstall/deinstall -local
Check the cluster and resource status of our 2-node cluster:
[grid@gract2 ~]$ olsnodes -s -t
gract1 Active Unpinned
gract2 Active Unpinned
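Optionally, the resource status can be cross-checked as well (a standard clusterware command, not part of the original output):
[grid@gract2 ~]$ crsctl stat res -t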