ORA-00312, ORA-00338


ORA-00312: online log 5 thread 3: '+DATA_CD501/oltp/onlinelog/group_5.264.799435405'
ORA-00338: log 5 of thread 3 is more recent than control file
ORA-00312: online log 5 thread 3: '+DATA_CD501/oltp/onlinelog/group_5.265.799435401'
ORA-00338: log 5 of thread 3 is more recent than control file
ORA-00312: online log 5 thread 3: '+DATA_CD501/oltp/onlinelog/group_5.264.799435405'
ORA-00338: log 5 of thread 3 is more recent than control file

These errors are caused by Bug 12770551. Please refer to the following document for details:
Bug 12770551 - Frequent ORA-338 during controlfile restore with ASYNC Data Guard (Doc ID 12770551.8)

The patch is available for download here:

https://updates.oracle.com/Orion/PatchDetails/process_form?patch_num=12770551

How to trace a Data Pump process


1.  Run the export Data Pump job as usual, or prepare a parfile similar to the following:

FULL=Y
DUMPFILE=EXPDP_DUMPDIR:expdp_MYDB_ROWS_FULL%U_012413:1800.dmp
CLUSTER=N
LOGFILE=EXPDP_LOGDIR:expdp_MYDB_ROWS_FULL_012413:1800.log
PARALLEL=4
CONTENT=ALL
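
A quick sketch of launching the export with such a parfile from the OS prompt (the connecting user and parfile name are placeholders; use an account with the appropriate Data Pump privileges):

expdp system parfile=expdp_MYDB_full.par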

2.  Connect to SQL*Plus as SYSDBA and obtain the Data Pump process information:

set lines 150 pages 100 numwidth 7
set time on
col program for a38
col username for a10
col spid for a7
 select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
from v$session s, v$process p, dba_datapump_sessions d
where p.addr=s.paddr and s.saddr=d.saddr;
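
It can also help to confirm the overall job state first with a quick check against DBA_DATAPUMP_JOBS:

select owner_name, job_name, operation, job_mode, state, degree
from dba_datapump_jobs;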

3.  From the same SQL*Plus session, using the SPID obtained in the previous step, attach to the Data Pump Worker (DW01) process:

SQL> oradebug setospid <spid_of_dw_process>

4.  Run the following command every few minutes:

SQL> oradebug current sql

Data Pump produces both a master process (DM00) trace file and worker process (DW0n) trace files.
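
To locate those trace files, one option (assuming 11g or later with the ADR in place) is to ask the attached process for its trace file name, or to query the trace directory:

SQL> oradebug tracefile_name
SQL> select value from v$diag_info where name = 'Diag Trace';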

High I/O and CPU waits when Oracle Data Pump (expdp) is slow or hung


If you run into a situation where a Data Pump job takes a very long time or hangs while consuming excessive resources (memory, CPU, I/O wait, etc.), check the paging/swapping-related parameters in /etc/sysctl.conf.

Example:


- oracle: vmstat 10 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 14 930336 673184 84964 51242600 0 0 273 113 0 0 2 4 93 1 0
27 15 930328 628948 85332 51281352 1 0 7753 3403 1448 16757 1 69 18 11 0
54 16 930340 628852 83536 51264948 11 0 342 1832 1398 14427 0 98 0 1 0
34 18 930232 641848 85048 51256972 24 1 2177 2033 1989 21538 0 89 4 7 0
0 14 929916 659768 85684 51253524 37 7 1527 1642 1480 15844 1 75 12 12 0
6 14 929944 629840 86188 51273780 381 1 4967 3622 2092 24876 1 58 25 16 0
10 14 930120 629916 85744 51275736 3 0 4663 3254 1568 18072 1 61 20 18 0
19 14 930352 629204 83520 51254128 11 22 363 1594 1267 15785 0 79 8 13 0
9 15 930436 667332 84216 51242292 18 23 921 1606 1525 14833 0 96 2 2 0
36 17 930396 629960 86064 51274904 366 1 5928 2564 2142 24199 1 28 46 25 0

So we added the settings below to /etc/sysctl.conf to reduce paging/swapping. After making the changes, the Data Pump job finished in 1 hour (instead of running for 12 hours, hanging, and consuming all server resources). Work with your system administrator or Red Hat Support to make sure these parameters are appropriate for your environment, and monitor the results.

#BEGIN#
#Reduce swapping:
vm.swappiness = 10
#Percentage of memory at which background writeback of dirty pages starts:
vm.dirty_background_ratio=3
#Percentage of memory at which processes are forced to write out dirty pages:
vm.dirty_ratio=15
#How long (in centiseconds) dirty data may stay in the page cache before it is expired:
vm.dirty_expire_centisecs=500
#How often (in centiseconds) the pdflush/flusher threads wake up to write dirty pages:
vm.dirty_writeback_centisecs=100
#END#
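
After editing the file, the new values can be loaded without a reboot and then verified (run as root):

# sysctl -p
# sysctl vm.swappiness vm.dirty_background_ratio vm.dirty_ratio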


Exadata I/O Resource Manager (IORM) and Database Resource Manager (DBRM)


I/O Resource Manager (IORM) provides a way to manage I/O bandwidth for multiple databases in an Exadata environment. It helps you prioritize the I/O of a production database over test databases, and divide I/O resources among different classes of queries (DSS, reporting) and non-critical jobs such as ETL.

With traditional solutions, DBAs put critical databases on dedicated storage, add more disks, or reschedule non-critical tasks to off-peak hours, all of which are expensive and tedious. Exadata's IORM instead manages I/O according to your prioritization and resource-usage rules, expressed through consumer groups and resource plans.

1)  Create Consumer Groups for each type of similar workload, and create rules that dynamically map sessions to consumer groups based on session attributes (see the PL/SQL sketch after this list).

2)  Create Resource Plans, for example three plans:

     Ratio-Based Plan:  OLTP 60%, DSS 30%, Maintenance 10%

     Priority-Based Plan:  Priority 1: OLTP, Priority 2: DSS, Priority 3: Maintenance

     Hybrid Plan:
                       Level 1    Level 2
     OLTP              80%
     DSS               10%        90%
     Maintenance       5%
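
A minimal PL/SQL sketch of steps 1 and 2 using DBMS_RESOURCE_MANAGER (assuming 11g or later; the group, service, and plan names are examples, only the OLTP group is shown, and the remaining allocation goes to OTHER_GROUPS for brevity; the DSS and Maintenance groups would follow the same pattern):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- Step 1: a consumer group and a mapping rule based on a session attribute
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'OLTP_GROUP',
    comment        => 'online transaction workload');
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'OLTP_SVC',
    consumer_group => 'OLTP_GROUP');

  -- Step 2: a ratio-based plan (OLTP 60%, everything else 40% at level 1)
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'DAY_PLAN',
    comment => 'ratio-based daytime plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'OLTP_GROUP',
    comment          => 'OLTP gets 60%',
    mgmt_p1          => 60);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'DAY_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else',
    mgmt_p1          => 40);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/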

Consumer groups and plans are configured using the DBMS_RESOURCE_MANAGER package or through Resource Manager in Enterprise Manager.
You can create multiple plans (day plan, night plan, maintenance plan), but only one plan can be active at a time. To activate a plan, set the resource_manager_plan initialization parameter or use the job scheduler to enable it.
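
For example, a plan such as the DAY_PLAN sketched above could be activated with:

SQL> alter system set resource_manager_plan = 'DAY_PLAN';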

IORM allows multiple databases to share Exadata storage effectively by preventing test databases from impacting production databases and by sharing resources fairly among the production databases.

Inter-database plan:  allocates resources to each database; these plans are configured and enabled via CellCLI. Exadata uses inter-database plans together with intra-database plans (based on each database's consumer groups) to prioritize production databases over standby, QA, and test databases, for example:

Database                  Level 1    Level 2    Level 3
Production OLTP           70%
Production Reporting      30%
DR OLTP Standby                      100%
QA database                                     70%
Development database                            30%
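
A sketch of setting an inter-database plan like the one above with CellCLI (the database names are placeholders, and the command must be run on every storage cell):

CellCLI> ALTER IORMPLAN dbplan=((name=PRODOLTP, level=1, allocation=70), -
                                (name=PRODRPT,  level=1, allocation=30), -
                                (name=STBYOLTP, level=2, allocation=100), -
                                (name=QADB,     level=3, allocation=70), -
                                (name=DEVDB,    level=3, allocation=30), -
                                (name=other,    level=4, allocation=100))
CellCLI> LIST IORMPLAN DETAIL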

Category Plan:  a category is an attribute of each consumer group, and a category plan allocates resources to each category. Multiple workload types can therefore be managed by combining category plans, inter-database plans, and intra-database plans.
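
A category plan is set at the cell level in the same way; a short sketch with made-up category names (the categories themselves are created in the database with DBMS_RESOURCE_MANAGER and assigned to consumer groups):

CellCLI> ALTER IORMPLAN catplan=((name=CRITICAL, level=1, allocation=70), -
                                 (name=BATCH,    level=2, allocation=100), -
                                 (name=other,    level=3, allocation=100))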