Remove failed node from 11gR2 RAC cluster

To delete a failed node and de-register it from the cluster, run the following as root from a surviving node:

crsctl unpin css -n si01    # release the node's pinned node number
crsctl delete node -n si01  # remove the node from the clusterware configuration
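
A quick sanity check that si01 really is gone, using standard clusterware tooling (this check is an addition on my part, not from the original output):

olsnodes -s -t    # si01 should no longer be listed; -s shows status, -t shows pinned state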

crsctl stat res -t | grep si01      # check what is still registered against si01
srvctl remove vip -i si01-vip -f    # VIPs are managed through srvctl, not crsctl

crsctl stat res -t
crsctl stat res -t | grep si01      # should now return nothing
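
As an optional extra check (not part of the original procedure), the VIP can be queried directly; after the removal, srvctl should report that no VIP exists for the node:

srvctl config vip -n si01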

 
Validate the current state of the cluster before adding the node back:

cluvfy stage -pre nodeadd -n si01 -fixup -fixupdir /tmp    # pre-add checks; fixup scripts are written to /tmp
cluvfy stage -post hwos -n si01 -verbose                   # hardware and OS checks on the node
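
If the pre-nodeadd stage reports connectivity failures, the network check can be rerun in isolation before retrying (an optional extra step; adjust the node list to match your cluster):

cluvfy comp nodecon -n si01,si02,si03 -verbose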
 

If cluvfy reports only failures you have reviewed and accepted, addNode.sh's own pre-add checks can be skipped:

export IGNORE_PREADDNODE_CHECKS=Y

On si02, in /apps/grid/11.2.0/grid/oui/bin, update the inventory so the Grid home lists only the surviving nodes:

./runInstaller -updateNodeList ORACLE_HOME='/apps/grid/11.2.0/grid' "CLUSTER_NODES=si02,si03" CRS=TRUE
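
To confirm the node list really was updated, it can be read straight out of the central inventory (the /apps/oraInventory path is taken from the installer output further down; si01 should be absent at this point):

grep "NODE NAME" /apps/oraInventory/ContentsXML/inventory.xml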


Add the node back to the cluster, running addNode.sh from /apps/grid/11.2.0/grid/oui/bin on an existing node:

./addNode.sh -silent "CLUSTER_NEW_NODES={si01}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={si01-vip}"
  Starting Oracle Universal Installer...
 
  Checking swap space: must be greater than 500 MB.   Actual 32767 MB    Passed
  Oracle Universal Installer, Version 11.2.0.2.0 Production
  Copyright (C) 1999, 2010, Oracle. All rights reserved.
 
  
  Performing tests to see whether nodes si03,si01 are available
  ............................................................... 100% Done.

WARNING: A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/apps/oraInventory/orainstRoot.sh' with root privileges on nodes 'si01'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each cluster node.
/apps/oraInventory/orainstRoot.sh #On nodes si01
/apps/grid/11.2.0/grid/root.sh #On nodes si01
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node
   
The Cluster Node Addition of /apps/grid/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.
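
Once the root scripts below have completed cleanly on si01, cluvfy can validate the addition end to end (a standard post-nodeadd check, run from any node):

cluvfy stage -post nodeadd -n si01 -verbose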


- root: /apps/oraInventory/orainstRoot.sh
Changing permissions of /apps/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /apps/oraInventory to oinstall.
The execution of the script is complete.
- root: /apps/grid/11.2.0/grid/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /apps/grid/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /apps/grid/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
CRS-4046: Invalid Oracle Clusterware configuration.
CRS-4000: Command Create failed, or completed with errors.
Failure initializing entries in /etc/oracle/scls_scr/si01
/apps/grid/11.2.0/grid/perl/bin/perl -I/apps/grid/11.2.0/grid/perl/lib -I/apps/grid/11.2.0/grid/crs/install /apps/grid/11.2.0/grid/crs/install/rootcrs.pl execution failed

root.sh failed here because stale clusterware configuration from the node's previous life was still present on si01 (note the /etc/oracle/scls_scr failure above). To fix the CRS-4046/CRS-4000 errors, deconfigure the leftovers first:

cd /apps/grid/11.2.0/grid/crs/install

./rootcrs.pl -verbose -deconfig -force    # as root: wipe the stale clusterware configuration

Then re-run root.sh

- root: /apps/grid/11.2.0/grid/root.sh

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
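
To close out, confirm clusterware is healthy on all three nodes and that si01's resources are back online (routine checks):

crsctl check cluster -all
olsnodes -n -s
crsctl stat res -t | grep si01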
