Flex ASM

In previous releases, an ASM client could only access ASM through the ASM instance running on the same server; if that node's ASM instance failed, all of its clients failed with it.  Flex ASM removes this hard dependency between ASM and its database clients, so an ASM instance is no longer required on every node.  Clients communicate with ASM (metadata, data blocks, etc.) over a dedicated ASM network, and if an ASM instance fails, its clients can connect to another instance.  The default cardinality for ASM instances is three (similar to SCAN), regardless of cluster size.  Unlike SCAN, however, a two-node Flex Cluster runs 2 ASM instances, not 3.

Up to three ASM listeners are registered as remote listeners for each database client.  When installing 12c, if you choose to configure a Flex Cluster, Flex ASM is required and you must specify an ASM network.  However, Flex ASM does not require a Flex Cluster: it can run on a standard cluster and provide I/O services there.  There are no new instance parameters for Flex ASM; the default parameter settings are suitable for most situations.
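The ASM cardinality can be inspected and changed with srvctl.  A minimal sketch, assuming a 12c Grid Infrastructure environment; the block prints a note instead when srvctl is not on the PATH:

```shell
# Sketch (assumes 12c Grid Infrastructure with srvctl on the PATH);
# prints a note instead when srvctl is unavailable.
asm_cardinality() {
    if command -v srvctl >/dev/null 2>&1; then
        srvctl config asm              # shows the current ASM configuration, incl. count
        srvctl modify asm -count 3     # keep three ASM instances cluster-wide
    else
        echo "srvctl not found: would run 'srvctl modify asm -count 3'"
    fi
}
asm_cardinality
```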

Clients are automatically relocated to another instance, transparently to end users.  DBAs can query v$asm_client and manually relocate a client with the command "alter system relocate client '<client_id>'" before planned maintenance or patching, or to adjust the workload balance between instances.
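A hedged sketch of checking and relocating a client follows; the client id 'orcl1:orcl' is illustrative (the format is instance_name:db_name, taken from v$asm_client), and the block prints the statement instead when sqlplus is absent:

```shell
# Sketch (assumes sqlplus and a Flex ASM instance on this node); the client id
# 'orcl1:orcl' is illustrative.  Prints the SQL instead if sqlplus is absent.
relocate_client() {
    if command -v sqlplus >/dev/null 2>&1; then
        sqlplus -S / as sysasm <<'EOF'
select instance_name, db_name, status from v$asm_client;
alter system relocate client 'orcl1:orcl';
EOF
    else
        echo "sqlplus not found: would run alter system relocate client 'orcl1:orcl'"
    fi
}
relocate_client
```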

Flex Cluster - 12c Grid Infrastructure and RAC New Features

With Oracle 12c Clusterware, "Flex Clusters" are built to scale up to 2000 nodes.  You can use a Flex Cluster to manage large pools of application resources, with high availability and failover protection, running multiple databases and applications in one cluster.
  • Hub Nodes:  The main component of a Flex Cluster is a group of Hub Nodes.  There is only one group of Hub Nodes in a Flex Cluster deployment, and each Hub Node must be connected to the shared storage that is common across the group of Hub Nodes.
  • Leaf Nodes:  Zero or more Leaf Nodes can be connected to a Flex Cluster through a Hub Node.  Leaf Nodes are loosely coupled: each is associated with a single Hub Node, and Hub Nodes periodically exchange heartbeat messages with their associated Leaf Nodes.  Failure of the Hub Node or of the network results in Leaf Node eviction.  A Leaf Node does not require direct access to shared storage, nor does it need to be on the same networks (public and private) as the Hub Nodes.
  • Only the Hub Nodes have direct access to the OCR and voting disks
  • Hub-and-spoke topology is the key architectural feature that segments the cluster into groups of nodes.  It has two fundamental impacts: 1) limiting the size of the hub reduces contention on the OCR and voting disks, and 2) less heartbeat network traffic is exchanged between the nodes.
  • Clients on Leaf Nodes use GNS (Grid Naming Service) to locate Hub Node services.  This requires access to GNS through a fixed VIP running on one of the nodes, so that Leaf Node clients have a reliable naming service within the cluster.
  • You can enable or disable Flex Cluster functionality.  By default, it is disabled.
To convert a standard cluster to a Flex Cluster, ensure that GNS is configured with a fixed VIP, then set the cluster mode with "crsctl set cluster mode flex".  To convert a Flex Cluster back to a standard cluster, use "crsctl set cluster mode standard".

To show the current node role:
crsctl get node role status -node glsn02
Node glsn02 active role is 'hub'

An administrator can explicitly specify the node role as hub or leaf:
crsctl set node role leaf -node glsn02

An administrator can also set the node role to auto.  This lets the cluster decide which role a node performs based on the composition of the cluster; the auto role works with the cluster hubsize setting.  If the number of Hub Nodes is smaller than hubsize, the node joins the cluster as a Hub Node.  Otherwise, it joins the cluster as a Leaf Node.
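The auto-role decision described above can be sketched in a few lines of shell; the hubsize and current hub count values are illustrative:

```shell
# Sketch of the auto-role decision; values below are illustrative.
hubsize=3        # target number of Hub Nodes (e.g. crsctl set cluster hubsize 3)
current_hubs=2   # Hub Nodes already in the cluster

if [ "$current_hubs" -lt "$hubsize" ]; then
    role=hub     # below the target: the joining node becomes a Hub Node
else
    role=leaf    # target reached: the joining node becomes a Leaf Node
fi
echo "node joins the cluster as a $role node"
```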

The leafmisscount setting defines the threshold (in seconds) for tolerable communication failure between a Hub Node and an associated Leaf Node; if a failure lasts longer than the defined value, the Leaf Node is evicted from the cluster.  The default leafmisscount is 30 seconds.
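The setting is read and changed with crsctl; a minimal sketch, assuming 12c Clusterware (the value 45 is just an example, and the block prints the commands instead when crsctl is absent):

```shell
# Sketch (assumes 12c Clusterware); prints the commands if crsctl is absent.
show_leafmisscount() {
    if command -v crsctl >/dev/null 2>&1; then
        crsctl get cluster leafmisscount       # current threshold in seconds
        crsctl set cluster leafmisscount 45    # e.g. raise it to 45 seconds
    else
        echo "crsctl not found: would run crsctl get/set cluster leafmisscount"
    fi
}
show_leafmisscount
```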

Flex cluster and Node Failure:
  • Nodes that are evicted from the cluster do not require a server restart; only the cluster software is restarted.
  • If a Hub Node fails:  1) the node is evicted from the cluster (services on that Hub Node are relocated to other Hub Nodes); 2) Leaf Nodes can reconnect to other Hub Nodes within the grace period, otherwise they are evicted from the cluster.
  • If a Leaf Node fails:  the node is evicted from the cluster, and the cluster attempts to relocate services running on that Leaf Node to other Leaf Nodes connected to the same Hub Node.

sosreport utility

sosreport collects debugging information about a system and stores it in /tmp as a compressed file.
sosreport uses plug-ins, controlled with the options -l (list available plug-ins), -n PLUGNAME (do not load the specified plug-in), and -e PLUGNAME (enable the specified plug-in).
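For example, the plug-in options can be exercised as follows; a sketch using the old sosreport 1.7 syntax, with 'emc' as an example plug-in name (the block prints a note when sosreport is not installed):

```shell
# Sketch of the plug-in options (sosreport 1.7 syntax; 'emc' is an example
# plug-in name).  Prints a note if sosreport is not installed.
sos_plugins() {
    if command -v sosreport >/dev/null 2>&1; then
        sosreport -l          # list available plug-ins
        # sosreport -n emc    # collect without the emc plug-in
        # sosreport -e emc    # collect with the emc plug-in enabled
    else
        echo "sosreport not installed (yum install sos)"
    fi
}
sos_plugins
```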

To install it:
yum install sos

Loaded plugins: rhnplugin, security
rhel-x86_64-server-5                                                                                                                                                                    | 1.4 kB     00:00  
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package sos.noarch 0:1.7-9.62.el5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

===============================================================================================================================================================================================================
 Package                                   Arch                                         Version                                               Repository                                                  Size
===============================================================================================================================================================================================================
Updating:
 sos                                       noarch                                       1.7-9.62.el5                                          rhel-x86_64-server-5                                       162 k

Transaction Summary
===============================================================================================================================================================================================================
Install       0 Package(s)
Upgrade       1 Package(s)

Total download size: 162 k
Is this ok [y/N]: y
Downloading Packages:
sos-1.7-9.62.el5.noarch.rpm                                                                                                                                                             | 162 kB     00:00  
Transaction Test Succeeded
Running Transaction
  Updating       : sos                                                                                                                                                                                     1/2
  Cleanup        : sos                                                                                                                                                                                     2/2

Updated:
  sos.noarch 0:1.7-9.62.el5                                                                                                                                                                                  

Complete!

To execute sosreport:
sosreport

sosreport (version 1.7)

This utility will collect some detailed  information about the
hardware and  setup of your  Red Hat Enterprise Linux  system.
The information is collected and an archive is  packaged under
/tmp, which you can send to a support representative.
Red Hat will use this information for diagnostic purposes ONLY
and it will be considered confidential information.

This process may take a while to complete.
No changes will be made to your system.

Press ENTER to continue, or CTRL-C to quit.

One or more plugins have detected a problem in your configuration.
Please review the following messages:

process:
    * one or more processes are in state D (sosreport might hang)

Are you sure you would like to continue (y/n) ? y

Please enter your first initial and last name [gls02]: anguyengls02
Please enter the case number that you are generating this report for: 1

EMC PowerPath is installed.
 Gathering EMC PowerPath information...
EMC PowerPath is running.
 Gathering additional EMC PowerPath information...
 plugin emc finished ...                          
 plugin yum finished ...                        
 Completed.

Creating compressed archive...

Your sosreport has been generated and saved in:
  /tmp/sosreport-anguyengls02.1-710805-8771ca.tar.bz2

The md5sum is: ed33337f0f37d788f5bb61690b8771ca

Please send this file to your support representative.

SharePlex installation

Download the SharePlex software SharePlex-7.6.1-b27-oracle100-aix-52-ppc-m64.tpm from Quest.

GLSLAB1 >>> ulimit -aH
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        4194304
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) unlimited

Installation:
glsdb01:/apps/oracle/soft
GLSLAB1 >>> ./SharePlex-7.6.1-b27-oracle100-aix-52-ppc-m64.tpm
Unpacking ..................................................................
  ..........................................................................
  ..........................................................................
  ..........................................................................
  .................................................................
SharePlex for Oracle installation program:
    SharePlex Version: 7.6.1
    Supported Oracle Version: 10gR2
    Build platform: aix-52-ppc
    Target platform: aix-53-ppc
Please enter the product directory location? /app/software/shareplex
Please enter the variable data directory location? /data/oracle/splex/vardir
Please specify the SharePlex Admin group (select a number):
1. [oinstall]
2. dba
?  1
Please wait while the installer obtains Oracle information ........
Please specify the ORACLE_SID that corresponds to this installation (select a number):
1. [current => GLSLAB1]
2. REFRESH1
6. <Other ...>
1
Please enter the ORACLE_HOME directory that corresponds to this ORACLE_SID? [/u01/oracle/10.2.0/DB04]
Please enter the TCP/IP port number for SharePlex communications? [2100]
Preparing to install SharePlex for Oracle v. 7.6.1:
    User:                     oracle
    Admin Group:              oinstall
    Product Directory:        /splex/oracle/product/splex
    Variable Data Directory:  /data/oracle/splex/vardir
    ORACLE_SID:               GLSLAB1
    ORACLE_HOME:              /u01/oracle/10.2.0/DB04
Proceed with installation? [yes]
Installing ................................................................
  .........................................................................
  .........................................................................
  .........................................................................
  ..
Create a profile for SharePlex (.pr_splex) and insert the following lines into it:

 cat .pr_splex
 export SP_SYS_HOST_NAME=glsdb01-vip
 export SP_COP_TPORT=2100
 export SP_COP_UPORT=2100
 export SP_SYS_VARDIR=/data/oracle/bkup1/splex/vardir
 #
 export SP_SW=/data/oracle/bkup1/splex/proddir
 export SP_BIN=${SP_SW}/bin
 #
 # -- For AIX Only
 # --

export EXTSHM=ON

cd $SP_BIN

To startup / shutdown / create configuration for shareplex:

Startup
$ cd /productdir/bin
$ ./sp_cop &
$ ./sp_ctrl

Create configuration
sp_ctrl(sysA)> create config od.config

Activate configuration
sp_ctrl(sysA)> activate config od.config

Shutdown Shareplex
sp_ctrl > shutdown
Run the splex_add_key script to start the SharePlex License Utility.

List parameters, compare, and show status:

sp_ctrl(sysA)> list param all read
sp_ctrl > show compare
sp_ctrl > show compare detail
sp_ctrl > compare table from source to target
sp_ctrl > qstatus

GoldenGate Installation

High level GoldenGate Architecture

It provides log-based change data capture (CDC) and replication of committed database transactions.  The software provides capture, routing, transformation, and delivery of transactional data across heterogeneous environments in real time.
The movement of data happens in 4 steps:
1) Capture:  changed data operations committed to the database transaction logs are captured in a nonintrusive, high-performance, low-overhead implementation
2) Route:  a variety of transport protocols can be used, and changed data can be compressed and encrypted prior to routing
3) Transform:  GoldenGate can execute a number of built-in functions such as filtering and transformation
4) Apply:  it applies the changed transactional data with only sub-second latency

Installation:

Download the GoldenGate software ogg112102_ggs_Linux_s390x_ora11g_64bit.zip from Oracle E-Delivery:
http://edelivery.oracle.com
Select a language ==> Continue ==> Export Validation, and check the box to agree to the license agreement.
Select the Oracle Fusion Middleware Product Pack, choose the appropriate platform, and select the Oracle GoldenGate Media Pack.

Perform the same steps (1 to 4) on the source and destination servers.
1) tar -xvf fbo_ggs_Linux_x64_ora11g_64bit.tar
2) Make a symbolic link

ln -s $ORACLE_HOME/lib/libnnz11.so $ORACLE_HOME/lib/libnnz10.so
3) Set LD_LIBRARY_PATH and validate that the libraries required by GoldenGate are installed by executing the following shell commands
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/apps/11.2.0/gg
ldd ggsci
ldd mgr
ldd extract
ldd replicat
The commands will return an error message if any libraries are missing.
DBATOOLS - oracle: ldd replicat
        linux-vdso.so.1 =>  (0x00007fff557fd000)
        libdl.so.2 => /lib64/libdl.so.2 (0x000000392e400000)
        libgglog.so => /apps/11.2.0/gg/libgglog.so (0x00002ba6aee97000)
        libggrepo.so => /apps/11.2.0/gg/libggrepo.so (0x00002ba6af0d3000)
        libdb-5.2.so => /apps/11.2.0/gg/libdb-5.2.so (0x00002ba6af227000)
        libicui18n.so.38 => /apps/11.2.0/gg/libicui18n.so.38 (0x00002ba6af4c8000)
        libicuuc.so.38 => /apps/11.2.0/gg/libicuuc.so.38 (0x00002ba6af829000)
        libicudata.so.38 => /apps/11.2.0/gg/libicudata.so.38 (0x00002ba6afb62000)
        libxerces-c.so.28 => /apps/11.2.0/gg/libxerces-c.so.28 (0x00002ba6b0b3e000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x000000392e800000)
        libantlr3c.so => /apps/11.2.0/gg/libantlr3c.so (0x00002ba6b1056000)
        libnnz11.so => /apps/11.2.0/DB/lib/libnnz11.so (0x00002ba6b116c000)
        libclntsh.so.11.1 => /apps/11.2.0/DB/lib/libclntsh.so.11.1 (0x00002ba6b1539000)
        libstdc++.so.6 => /usr/lib64/libstdc++.so.6 (0x000000392fc00000)
        libm.so.6 => /lib64/libm.so.6 (0x000000392ec00000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x0000003930000000)
        libc.so.6 => /lib64/libc.so.6 (0x000000392e000000)
        /lib64/ld-linux-x86-64.so.2 (0x000000392dc00000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x0000003931c00000)
        libaio.so.1 => /usr/lib64/libaio.so.1 (0x00002ba6b3ec7000)
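The per-binary checks above can be wrapped in a small loop that flags unresolved libraries.  A sketch; /bin/ls stands in for the GoldenGate binaries (ggsci, mgr, extract, replicat) so the loop runs anywhere, and BINS would point at the GoldenGate install directory in real use:

```shell
# Sketch: flag any binary whose shared libraries do not all resolve.
# /bin/ls is a stand-in; in real use, list the GoldenGate binaries here.
BINS="/bin/ls"
status=ok
for b in $BINS; do
    if ldd "$b" 2>/dev/null | grep -q 'not found'; then
        echo "$b: unresolved libraries"
        ldd "$b" | grep 'not found'
        status=missing
    else
        echo "$b: all libraries resolved"
    fi
done
```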

4) ./ggsci
GGSCI (ljtcdb105.fnf.com) 2> create subdirs
Creating subdirectories under current directory /apps/11.2.0/gg
Parameter files                /apps/11.2.0/gg/dirprm: created (Parameter runtime configuration)
Report files                   /apps/11.2.0/gg/dirrpt: created (Process report files)
Checkpoint files               /apps/11.2.0/gg/dirchk: created (Golden Gate Checkpoint files)
Process status files           /apps/11.2.0/gg/dirpcs: created (Process Status)
SQL script files               /apps/11.2.0/gg/dirsql: created (SQL scipts)
Database definitions files     /apps/11.2.0/gg/dirdef: created (Source data definitions produced by DEFGEN and used to translate heterogeneous data)
Extract data files             /apps/11.2.0/gg/dirdat: created (Golden Gate trail and Extract files)
Temporary files                /apps/11.2.0/gg/dirtmp: created (Temporary storage for transactions that exceed memory)
Stdout files                   /apps/11.2.0/gg/dirout: created

5) Switch to archivelog mode
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;

6) Enable minimal supplemental logging
SQL> alter database add supplemental log data;
SQL> alter system switch logfile;
SQL> select supplemental_log_data_min from v$database;

7) Turn off recyclebin (this is optional) and bounce the database
SQL> alter system set recyclebin=off scope=spfile;

8) Create a new tablespace GGATE and a new user GG_USER as the Oracle GoldenGate schema, and assign the user to this tablespace.
create tablespace ggate datafile '+DBA_PD101' size 5g autoextend on;
create user gg_user identified by oracle123 default tablespace GGATE temporary tablespace temp;

9) Grant necessary permission to gg_user
SQL> grant connect, resource, unlimited tablespace to gg_user;
SQL> grant execute on utl_file to gg_user;

10) Create the necessary objects for DDL replication
SQL> @marker_setup.sql
Marker setup script
SQL> @ddl_setup.sql
Oracle GoldenGate DDL Replication setup script
SQL> @role_setup
GGS Role setup script
This script will drop and recreate the role GGS_GGSUSER_ROLE
SQL> GRANT GGS_GGSUSER_ROLE TO gg_user;
SQL> @ddl_enable.sql
Trigger altered.

ORA-00338: log 32 of thread 3 is more recent than control file

The solution is to apply patch 12770551.

We got this ORA-338 error in a Data Guard environment with ASYNC redo transport.  The connection with the remote database is closed and will not reopen until the next log switch.  Although this is not a critical situation and does not affect database operation, applying patch 12770551 is a simple way to get rid of the error.
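Until the patch is applied, the transport can be reopened sooner by forcing a log switch.  A minimal sketch, assuming sqlplus on the primary ("archive log current" switches the log on all threads, which matters here since the error names thread 3); the block prints the statement instead when sqlplus is absent:

```shell
# Sketch (assumes sqlplus on the primary): force a log switch on all threads
# so the closed ASYNC transport connection reopens without waiting.
force_switch() {
    if command -v sqlplus >/dev/null 2>&1; then
        sqlplus -S / as sysdba <<'EOF'
alter system archive log current;
EOF
    else
        echo "sqlplus not found: would run alter system archive log current"
    fi
}
force_switch
```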