Grid Infrastructure multicast issue - Oracle Bug 9974223

When installing 11.2.0.2 Grid Infrastructure, the root.sh step runs successfully on the first node, but fails on the subsequent nodes with a multicast error on the private NIC: "Multicast Failed for eth2 using address 230.0.1.0:42000". This is Oracle Bug 9974223.

Failed to start Oracle Clusterware stack
Failed to start Cluster Synchronization Service in clustered mode at /apps/grid/11.2.0/grid/crs/install/crsconfig_lib.pm line 1016.
/apps/grid/11.2.0/grid/perl/bin/perl -I/apps/grid/11.2.0/grid/perl/lib -I/apps/grid/11.2.0/grid/crs/install /apps/grid/11.2.0/grid/crs/install/rootcrs.pl execution failed
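
To confirm the multicast symptom on a failing node before changing anything, you can search the CSS daemon log. The node name srv02 and the grid home below are taken from this example, and the log path assumes the standard 11.2 layout under the grid home; adjust both for your environment.

# On the failing node, as the grid software owner
grep -i multicast /apps/grid/11.2.0/grid/log/srv02/cssd/ocssd.log | tail -20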

Before kicking off runInstaller for the RAC grid infrastructure software, you should run the mcasttest utility to test the availability of the multicast addresses. It can be downloaded from Oracle MetaLink Note 1212703.1.

[oracle@srv01 mcasttest]$ ./mcasttest.pl -n srv01,srv02,srv03 -i eth2
###########  Setup for node srv01  ##########
Checking node access 'srv01'
Checking node login 'srv01'
Checking/Creating Directory /tmp/mcasttest for binary on node 'srv01'
Distributing mcast2 binary to node 'srv01'
###########  Setup for node srv02  ##########
Checking node access 'srv02'
Checking node login 'srv02'
Checking/Creating Directory /tmp/mcasttest for binary on node 'srv02'
Distributing mcast2 binary to node 'srv02'
###########  Setup for node srv03  ##########
Checking node access 'srv03'
Checking node login 'srv03'
Checking/Creating Directory /tmp/mcasttest for binary on node 'srv03'
Distributing mcast2 binary to node 'srv03'
###########  testing Multicast on all nodes  ##########

Test for Multicast address 230.0.1.0

May 27 15:29:56 | Multicast Failed for eth2 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

May 27 15:29:57 | Multicast Succeeded for eth2 using address 224.0.0.251:42001
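
As an independent OS-level check (not part of the Oracle note, just a sketch), you can watch the interconnect with tcpdump on the other nodes while mcasttest.pl runs. If packets for 230.0.1.0 never arrive on the peers, the switch or network is dropping that multicast group.

# As root on a peer node, while the test is running
tcpdump -i eth2 -n 'host 230.0.1.0 or host 224.0.0.251'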

You should apply the patch for Oracle Bug 9974223 to the grid home before running root.sh (opatch napply -local -oh /apps/grid/11.2.0/grid -id 9974223). With the fix applied, the Clusterware can also use the 224.0.0.251 multicast address, so only one of the two addresses needs to work on the private network.
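
A minimal sketch of that pre-root.sh step, assuming the patch has already been unzipped under /tmp/9974223 (the directory name is illustrative):

# As the grid software owner, before running root.sh on this node
cd /tmp/9974223
/apps/grid/11.2.0/grid/OPatch/opatch napply -local -oh /apps/grid/11.2.0/grid -id 9974223
# Confirm the patch shows up in the inventory
/apps/grid/11.2.0/grid/OPatch/opatch lsinventory -oh /apps/grid/11.2.0/grid

Because -local only patches the home on the node where you run it, repeat this on every node before running root.sh there.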

If you already ran root.sh on the first node and it was successful, but it failed on the second node, then do the following steps (a consolidated sketch of the commands follows below):

crsctl stop cluster -all
crsctl stop crs

Prior to applying this part of the fix, you must invoke rootcrs.pl as root to unlock the protected files in the grid home.

As root:     <CRS_HOME>/crs/install/rootcrs.pl -unlock

As oracle:            opatch napply -local -oh <CRS_HOME> -id 9974223

As root:     <CRS_HOME>/crs/install/rootcrs.pl -patch

Then continue the installation on the subsequent nodes: as root, run ./root.sh on each of them.
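
Putting those steps together for this environment (grid home /apps/grid/11.2.0/grid, patch unzipped under /tmp/9974223; both paths are illustrative):

# As root, stop the stack that is already running
/apps/grid/11.2.0/grid/bin/crsctl stop cluster -all
/apps/grid/11.2.0/grid/bin/crsctl stop crs

# As root, unlock the grid home so it can be patched
/apps/grid/11.2.0/grid/crs/install/rootcrs.pl -unlock

# As oracle (grid software owner), apply the one-off patch to this home
cd /tmp/9974223
/apps/grid/11.2.0/grid/OPatch/opatch napply -local -oh /apps/grid/11.2.0/grid -id 9974223

# As root, re-lock the home and bring the stack back up
/apps/grid/11.2.0/grid/crs/install/rootcrs.pl -patch

# As root on each remaining node, rerun root.sh
/apps/grid/11.2.0/grid/root.sh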

