Copy Backupsets from ASM to Local File System

SQL> alter system set db_recovery_file_dest_size = 20g;
System altered.

SQL> alter system set db_recovery_file_dest = '+FRA';
System altered.

SQL> alter database open;
Database altered.

oracle@rac01{MYDBA1}/apps/oracle> rman target /
RMAN> backup database plus archivelog delete input;

Starting backup at 30-MAY-11
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=28 instance=MYDBA1 device type=DISK
piece handle=+FRA/mydba/backupset/2011_05_30/annnf0_tag20110530t221427_0.320.752537669 tag=TAG20110530T221427 comment=NONE
piece handle=+FRA/mydba/backupset/2011_05_30/nnndf0_tag20110530t221430_0.318.752537671 tag=TAG20110530T221430 comment=NONE
piece handle=+FRA/mydba/backupset/2011_05_30/ncsnf0_tag20110530t221430_0.319.752537717 tag=TAG20110530T221430 comment=NONE
piece handle=+FRA/mydba/backupset/2011_05_30/annnf0_tag20110530t221520_0.322.752537721 tag=TAG20110530T221520 comment=NONE

Finished backup at 30-MAY-11

RMAN> list backup;

List of Backup Sets
===================
BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
1       51.25M     DISK        00:00:01     30-MAY-11     
        BP Key: 1   Status: AVAILABLE  Compressed: NO  Tag: TAG20110530T221427
        Piece Name: +FRA/mydba/backupset/2011_05_30/annnf0_tag20110530t221427_0.320.752537669

  List of Archived Logs in backup set 1
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    370     21024735   30-MAY-11 21065551   30-MAY-11
  2    348     21058468   30-MAY-11 21064126   30-MAY-11

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2       Full    1.52G      DISK        00:00:44     30-MAY-11     
        BP Key: 2   Status: AVAILABLE  Compressed: NO  Tag: TAG20110530T221430
        Piece Name: +FRA/mydba/backupset/2011_05_30/nnndf0_tag20110530t221430_0.318.752537671
  List of Datafiles in backup set 2
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 21065563   30-MAY-11 +DATA/mydba/datafile/system.256.724854319
  2       Full 21065563   30-MAY-11 +DATA/mydba/datafile/sysaux.257.724854321
  3       Full 21065563   30-MAY-11 +DATA/mydba/datafile/undotbs1.258.724854323
  4       Full 21065563   30-MAY-11 +DATA/mydba/datafile/users.259.724854323
  5       Full 21065563   30-MAY-11 +DATA/mydba/datafile/undotbs2.264.724854533

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3       Full    17.70M     DISK        00:00:01     30-MAY-11     
        BP Key: 3   Status: AVAILABLE  Compressed: NO  Tag: TAG20110530T221430
        Piece Name: +FRA/mydba/backupset/2011_05_30/ncsnf0_tag20110530t221430_0.319.752537717
  SPFILE Included: Modification time: 30-MAY-11
  SPFILE db_unique_name: MYDBA
  Control File Included: Ckp SCN: 21065725     Ckp time: 30-MAY-11

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
4       8.00K      DISK        00:00:01     30-MAY-11     
        BP Key: 4   Status: AVAILABLE  Compressed: NO  Tag: TAG20110530T221520
        Piece Name: +FRA/mydba/backupset/2011_05_30/annnf0_tag20110530t221520_0.322.752537721

  List of Archived Logs in backup set 4
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    371     21065551   30-MAY-11 21065731   30-MAY-11

RMAN> backup backupset 1,2,3,4 format '/oracle/backup/testrman/%U';
=> this command backs up the backup sets from ASM '+FRA' to the file system location /oracle/backup/testrman

+FRA/mydba/backupset/2011_05_30/annnf0_tag20110530t221427_0.320.752537669
piece handle=/oracle/backup/testrman/01mdlk23_1_2 comment=NONE
+FRA/mydba/backupset/2011_05_30/nnndf0_tag20110530t221430_0.318.752537671
piece handle=/oracle/backup/testrman/02mdlk26_1_2 comment=NONE
+FRA/mydba/backupset/2011_05_30/ncsnf0_tag20110530t221430_0.319.752537717
piece handle=/oracle/backup/testrman/03mdlk3k_1_2 comment=NONE
channel ORA_DISK_1: finished piece 1 at 30-MAY-11
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
 +FRA/mydba/backupset/2011_05_30/annnf0_tag20110530t221520_0.322.752537721
piece handle=/oracle/backup/testrman/04mdlk3o_1_2 comment=NONE
channel ORA_DISK_1: finished piece 1 at 30-MAY-11
channel ORA_DISK_1: backup piece complete, elapsed time: 00:00:01
Finished backup at 30-MAY-11

RMAN> backup spfile include current controlfile format '/oracle/backup/testrman/source_ctl';
piece handle=/oracle/backup/testrman/source_ctl tag=TAG20110530T222524 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 30-MAY-11

Then you can copy these backup pieces to a new server and restore the database on a file system. You will need to restore the spfile from the source backup, create a pfile from it, and modify the pfile for these parameters: control_files, db_create_file_dest, db_file_name_convert, log_file_name_convert, and undo_tablespace. After that, you can restore the control file, restore the database, recover the database until cancel using the backup controlfile, and open the database with resetlogs.
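A minimal sketch of these steps on the target server, assuming the backup pieces (including source_ctl) have been copied to /oracle/backup/testrman there; the DBID value <source_dbid> and the pfile path /tmp/initMYDBA.ora are placeholders:

rman target /

RMAN> set dbid <source_dbid>;
RMAN> startup force nomount;
RMAN> restore spfile to pfile '/tmp/initMYDBA.ora' from '/oracle/backup/testrman/source_ctl';

(edit /tmp/initMYDBA.ora: control_files, db_create_file_dest, db_file_name_convert, log_file_name_convert, undo_tablespace)

RMAN> startup force nomount pfile='/tmp/initMYDBA.ora';
RMAN> restore controlfile from '/oracle/backup/testrman/source_ctl';
RMAN> alter database mount;
RMAN> catalog start with '/oracle/backup/testrman/' noprompt;
RMAN> restore database;
RMAN> restore archivelog all;

SQL> recover database until cancel using backup controlfile;
SQL> alter database open resetlogs;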


Oracle Clusterware Processes

Below are the components and processes of Oracle Clusterware:

[oracle@rac01 ~]$ ps -ef | grep -v grep | grep d.bin | awk '{print $8}' | cut -c28-50

Components                                     Processes
--------------------------------------------   -----------------------------
Cluster Ready Service (CRS)                    crsd
Cluster Synchronization Service (CSS)          ocssd, cssdmonitor, cssdagent
Event Manager (EVM)                            evmd, evmlogger
Cluster Time Synchronization Service (CTSS)    octssd
Oracle Notification Service (ONS)              ons, eons
Oracle Agent                                   oraagent
Oracle Root Agent                              orarootagent
Grid Naming Service (GNS)                      gnsd
Grid Plug and Play (GPnP)                      gpnpd
Multicast Domain Name Service (mDNS)           mdnsd

Grid Infrastructure Redundant Interconnect

Starting with 11.2.0.2, Grid Infrastructure supports a redundant interconnect (the ora.cluster_interconnect.haip resource) without any third-party failover technology such as bonding or IPMP. Multiple private network adapters can be defined during the installation or afterwards via oifcfg; GI can activate at most four private network adapters. Oracle GI automatically picks link-local addresses from the reserved 169.254.*.* subnet for HAIP. By default, interconnect traffic is load balanced across all active interconnect interfaces and fails over to the remaining adapters if one fails.
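For example, an additional private adapter can be registered after installation with oifcfg setif (a sketch; the interface name eth4 and the subnet below are placeholders for your environment):

[root@rac01]# ./oifcfg setif -global eth4/172.17.34.0:cluster_interconnect
[root@rac01]# ./oifcfg getif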

The redundant interconnect is based on a highly available virtual IP (HAIP) assigned to each private network. The instances use the link-local 169.254.*.* HAIP addresses instead of the real private IPs. For pre-11.2.0.2 instances, the init.ora parameter cluster_interconnects still defaults to the real private IPs; it can be updated so those instances take advantage of the new feature.
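Which interconnect addresses the instances are actually using can be checked from gv$cluster_interconnects, for example:

SQL> select inst_id, name, ip_address, is_public, source from gv$cluster_interconnects;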

[root@rac0]# ./oifcfg getif
eth2  173.11.33.0  global  cluster_interconnect
eth3  173.11.33.0  global  cluster_interconnect
bond0  10.5.1.0    global  public

[root@rac01]# ./oifcfg iflist -p -n
eth2  173.11.33.0                PRIVATE  255.255.255.0
eth2  169.254.0.0                UNKNOWN  255.255.128.0  => started by the haip resource
eth3  172.17.33.0                PRIVATE  255.255.255.0
eth3  169.254.0.0                UNKNOWN  255.255.128.0  => started by the haip resource

bond0  10.5.1.0                   PRIVATE  255.255.255.0

[oracle@rac01]$ ./crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac01              Started            
ora.cluster_interconnect.haip

HAIP can be validated via ifconfig -a, or in the ASM and database instance alert logs.


ifconfig -a

eth2:1    Link encap:Ethernet  HWaddr 64:31:50:50:BC:FA 
          inet addr:169.254.xx.xx  Bcast:169.254.127.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          Interrupt:170 Memory:f8000000-f8012800

eth3:1    Link encap:Ethernet  HWaddr 64:31:50:50:BC:FC 
          inet addr:169.254.xxx.xxx  Bcast:169.254.255.255  Mask:255.255.128.0
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          Interrupt:178 Memory:f6000000-f6012800

adrci> set homepath diag/asm/+asm/+ASM1
adrci> show alert -P "MESSAGE_TEXT like '%eth2:1%'"

ADR Home = /apps/oracle/diag/asm/+asm/+ASM1:
*************************************************************************
Output the results to file: /tmp/alert_10218_4735_+ASM1_12.ado

2011-05-27 15:00:39.753000 -05:00
Private Interface 'eth2:1' configured from GPnP for use as a private interconnect.
  [name='eth2:1', type=1, ip=169.254.x.x, mac=6-31-50-4e-054, net=169.254.0.0/17, mask=255.255.128.0, use=haip:cluster_interconnect/62]
2011-05-27 16:28:08.301000 -05:00
Private Interface 'eth2:1' configured from GPnP for use as a private interconnect.
  [name='eth2:1', type=1, ip=169.254.x.x, mac=64-31-50-4e-054, net=169.254.0.0/17, mask=255.255.11.0, use=haip:cluster_interconnect/62]

Grid Infrastructure multicast issue - Oracle Bug 9974223

When installing 11.2.0.2 Grid Infrastructure, root.sh runs successfully on the first node, but on the subsequent nodes it fails on the private NIC with "Multicast Failed for eth2 using address 230.0.1.0:42000". This is Oracle Bug 9974223.

Failed to start Oracle Clusterware stack
Failed to start Cluster Synchronization Service in clustered mode at /apps/grid/11.2.0/grid/crs/install/crsconfig_lib.pm line 1016.
/apps/grid/11.2.0/grid/perl/bin/perl -I/apps/grid/11.2.0/grid/perl/lib -I/apps/grid/11.2.0/grid/crs/install /apps/grid/11.2.0/grid/crs/install/rootcrs.pl execution failed

Before kicking off runInstaller for the RAC Grid Infrastructure software, you should run the mcasttest utility to test the availability of multicast addresses. It can be downloaded from Oracle MetaLink Note 1212703.1.

[oracle@srv01 mcasttest]$ ./mcasttest.pl -n srv01,srv02,srv03 -i eth2
###########  Setup for node srv01  ##########
Checking node access 'srv01'
Checking node login 'srv01'
Checking/Creating Directory /tmp/mcasttest for binary on node 'srv01'
Distributing mcast2 binary to node 'srv01'
###########  Setup for node srv02  ##########
Checking node access 'srv02'
Checking node login 'srv02'
Checking/Creating Directory /tmp/mcasttest for binary on node 'srv02'
Distributing mcast2 binary to node 'srv02'
###########  Setup for node srv03  ##########
Checking node access 'srv03'
Checking node login 'srv03'
Checking/Creating Directory /tmp/mcasttest for binary on node 'srv03'
Distributing mcast2 binary to node 'srv03'
###########  testing Multicast on all nodes  ##########

Test for Multicast address 230.0.1.0

May 27 15:29:56 | Multicast Failed for eth2 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

May 27 15:29:57 | Multicast Succeeded for eth2 using address 224.0.0.251:42001

You should apply the patch for Oracle Bug 9974223 to the grid home before running root.sh (opatch napply -local -oh /apps/grid/11.2.0/grid -id 9974223).

If root.sh already ran successfully on the first node but then failed on the second node, do the following steps:

crsctl stop cluster -all
crsctl stop crs

Prior to applying this part of the fix, you must unlock the protected files as root, apply the patch, and then relock the grid home:

As root:    <CRS_HOME>/crs/install/rootcrs.pl -unlock

As oracle:  opatch napply -local -oh <CRS_HOME> -id 9974223

As root:    <CRS_HOME>/crs/install/rootcrs.pl -patch

Then continue on the subsequent nodes: as root, run ./root.sh.


Linux performance monitoring commands

uptime:  how long the system has been up and running
last:  log of system shutdowns and changes in run level
ps -flu oracle:  processes owned by the user oracle
pidof ora_arc0_LABDB1:  process identifier of a named process
free, cat /proc/meminfo:  memory usage
ipcs -m:  shared memory segments on the system
ipcrm -M <shmkey>:  remove the shared memory segment created with key <shmkey>
pmap -x 2103:  details of memory mapped by that particular process
ps -flu oracle | grep SCAN  => cd /proc/7835 (7835 being the SCAN process id)
oracle@si01.an.com{LABDB1}/proc/7835> cat status
ipcs -m
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status     
0x68d7e0dc 17727490   oracle    660        4096       0       


/usr/sbin/lsof -u oracle | grep 17727490
slabtop:  kernel slab cache usage (interactively sortable)


strace -p 26175 -o pmon_strace.txt
Process 26175 attached - interrupt to quit
Process 26175 detached

Oracle Restart

Oracle Restart is designed to improve the availability of your Oracle Database. It implements a high availability solution for single-instance (non-clustered) environments only. It can monitor the health of, and automatically restart, the components below in the proper order, in accordance with component dependencies. Oracle Restart runs out of the Oracle Grid Infrastructure home. The components are registered and managed with srvctl, as sketched after the list.

Database Instance
Oracle Net Listener
Database Services
ASM instance
ASM diskgroups
ONS/eONS
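
A minimal sketch of registering and managing a database with srvctl under Oracle Restart (the database name MYDB and the Oracle home path are placeholders):

srvctl add database -d MYDB -o /u01/apps/oracle/product/11.2.0/dbhome_1
srvctl start database -d MYDB
srvctl status database -d MYDB
srvctl config database -d MYDB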

It is started by the OS init daemon via the following inittab entry:

h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

To control the state of Oracle Restart

srv202:/u01/apps/11.2.0/grid/bin
+ASM > crsctl config has
+ASM> crsctl enable has
+ASM> crsctl disable has
+ASM> crsctl start has
+ASM > crsctl stop has

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'srv202'
CRS-2673: Attempting to stop 'ora.WESBNDV.lsnr' on 'srv202'
CRS-2673: Attempting to stop 'ora.wesbndv.db' on 'srv202'
CRS-2673: Attempting to stop 'ora.wesbstg.db' on 'srv202'
CRS-2673: Attempting to stop 'ora.ORDCDM00.lsnr' on 'srv202'
CRS-2673: Attempting to stop 'ora.WESBSTG.lsnr' on 'srv202'
CRS-2677: Stop of 'ora.WESBSTG.lsnr' on 'srv202' succeeded
CRS-2677: Stop of 'ora.WESBNDV.lsnr' on 'srv202' succeeded
CRS-2677: Stop of 'ora.ORDCDM00.lsnr' on 'srv202' succeeded
CRS-2677: Stop of 'ora.wesbstg.db' on 'srv202' succeeded
CRS-2677: Stop of 'ora.wesbndv.db' on 'srv202' succeeded
CRS-2673: Attempting to stop 'ora.DG_DATA.dg' on 'srv202'
CRS-2677: Stop of 'ora.DG_DATA.dg' on 'srv202' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'srv202'
CRS-2677: Stop of 'ora.asm' on 'srv202' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'srv202'
CRS-2677: Stop of 'ora.cssd' on 'srv202' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'srv202'
CRS-2677: Stop of 'ora.diskmon' on 'srv202' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'srv202' has completed
CRS-4133: Oracle High Availability Services has been stopped.

+ASM > crsctl start has

CRS-4123: Oracle High Availability Services has been started.

srv202:/u01/apps/11.2.0/grid/bin
+ASM > crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora.DG_DATA.dg ora....up.type ONLINE    ONLINE    srv202  
ora....00.lsnr ora....er.type ONLINE    ONLINE    srv202  
ora....DV.lsnr ora....er.type ONLINE    ONLINE    srv202  
ora....TG.lsnr ora....er.type ONLINE    ONLINE    srv202  
ora.asm        ora.asm.type   ONLINE    ONLINE    srv202  
ora.cssd       ora.cssd.type  ONLINE    ONLINE    srv202  
ora.diskmon    ora....on.type ONLINE    ONLINE    srv202  
ora.wesbndv.db ora....se.type ONLINE    ONLINE    srv202  
ora.wesbstg.db ora....se.type ONLINE    ONLINE    srv202  

Public, Virtual, Private, SCAN, HAIP in a RAC environment

In an Oracle RAC environment
  • Public:  added in /etc/hosts, registered with DNS, same subnet as the virtual and SCAN IPs, and pingable
  • Virtual:  added in /etc/hosts, registered with DNS, same subnet as the public and SCAN IPs, not pingable until Grid Infrastructure brings the VIP online
  • Private:  added in /etc/hosts and pingable
10.125.70.171    lldvdba02.cbm.com                        lldvdba02
10.125.70.172    lldvdba02-vip.cbm.com                  lldvdba02-vip
172.115.72.130  lldvdba02-priv.cbm.com                 lldvdba02-priv
  • SCAN:  registered in DNS (resolving to three addresses), not in /etc/hosts, not pingable until the SCAN VIPs are started
nslookup dbapoc-scan
Server:         10.250.129.79
Address:        10.250.129.79#53

Name:   dbapoc-scan.cbm.com
Address: 10.125.70.200
Name:   dbapoc-scan.cbm.com
Address: 10.125.70.201
Name:   dbapoc-scan.cbm.com
Address: 10.125.70.202

  • HAIP (High Availability IP):  new in 11.2.0.2. According to the MetaLink note, HAIP provides the redundant interconnect without relying on third-party failover technology (such as bonding or IPMP). Multiple private network adapters can be defined either during the installation phase or afterwards using oifcfg. Grid Infrastructure can activate a maximum of 4 private network adapters at a time, even if more are defined. The ora.cluster_interconnect.haip resource will start one to four link-local HAIPs on the private network adapters for interconnect communication for Oracle RAC, ASM, ACFS, etc. Grid automatically picks link-local addresses from the reserved 169.254.*.* subnet for HAIP, and it will not attempt to use any 169.254.*.* address that is already in use for another purpose. With HAIP, by default, interconnect traffic is load balanced across all active interconnect interfaces, and the corresponding HAIP address fails over transparently to the other adapters if one fails or becomes non-communicative. The number of HAIP addresses is decided by how many private network adapters are active when Grid comes up on the first node in the cluster: if there is only one active private network, Grid creates one; if two, Grid creates two; and if more than two, Grid creates four HAIPs. The number of HAIPs will not change even if more private network adapters are activated later; a restart of the clusterware on all nodes is required for new adapters to become effective. (Verification commands are sketched after this list.)
  • The network group will need to assign IP addresses for the public, virtual, and SCAN names and then register them with DNS.
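
Once Grid Infrastructure is up, the SCAN, VIP, and HAIP configuration can be verified with commands such as the following:

srvctl config scan
srvctl config nodeapps
oifcfg getif
crsctl stat res ora.cluster_interconnect.haip -init -t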

Useful SQL queries for RMAN backup and recovery


 spool MYDBINFO

SET PAGESIZE 20000
SET LINESIZE 1000
SET TRIMSPOOL ON
SET PAUSE OFF
SET SERVEROUTPUT ON
SET FEEDBACK ON
SET ECHO ON
SET NUMFORMAT 999999999999999
COL TABLESPACE_NAME FORMAT A50
COL FILE_NAME FORMAT A50
COL NAME FORMAT A50
COL MEMBER FORMAT A50
col DFILE_CHKP_CHANGE format a40
col DFILE_HED_CHKP_CHANGE format a40
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';


ARCHIVE LOG LIST;

SELECT * FROM v$instance;
SELECT * FROM v$recover_file;
SELECT * FROM v$recovery_log;
select distinct status from v$backup;
select distinct(status) from v$datafile;
select distinct (to_char(checkpoint_change#)) from v$datafile;
select distinct (to_char(checkpoint_change#)) from v$datafile_header;
 
select * from v$backup;

SELECT dbid,
name,
TO_CHAR(created, 'DD-MON-YYYY HH24:MI:SS') created,
open_mode,
log_mode,
TO_CHAR(checkpoint_change#, '999999999999999') as checkpoint_change#,
controlfile_type,
TO_CHAR(controlfile_change#, '999999999999999') as controlfile_change#,
TO_CHAR(controlfile_time, 'DD-MON-YYYY HH24:MI:SS') controlfile_time,
TO_CHAR(resetlogs_change#, '999999999999999') as resetlogs_change#,
TO_CHAR(resetlogs_time, 'DD-MON-YYYY HH24:MI:SS') resetlogs_time
FROM v$database;

SELECT f.name, b.status, b.change#, b.time
FROM v$backup b,
v$datafile f
WHERE b.file# = f.file#
AND b.status = 'ACTIVE';

SELECT name,
file#,
status,
enabled,
creation_change#,
TO_CHAR(creation_time, 'DD-MON-YYYY HH24:MI:SS') as creation_time,
TO_CHAR(checkpoint_change#, '999999999999999') as checkpoint_change#,
TO_CHAR(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
TO_CHAR(offline_change#, '999999999999999') as offline_change#,
TO_CHAR(online_change#, '999999999999999') as online_change#,
TO_CHAR(online_time, 'DD-MON-YYYY HH24:MI:SS') as online_time,
TO_CHAR(bytes, '9,999,999,999,990') as bytes
FROM v$datafile
where status <> 'ONLINE'
OR checkpoint_change# <> (SELECT checkpoint_change# FROM v$database);

SELECT name,
file#,
status,
error,
creation_change#,
TO_CHAR(creation_time, 'DD-MON-YYYY HH24:MI:SS') as creation_time,
TO_CHAR(checkpoint_change#, '999999999999999') as checkpoint_change#,
TO_CHAR(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
TO_CHAR(resetlogs_change#, '999999999999999') as resetlogs_change#,
TO_CHAR(resetlogs_time, 'DD-MON-YYYY HH24:MI:SS') as resetlogs_time,
TO_CHAR(bytes, '9,999,999,999,990') as bytes
FROM v$datafile_header
WHERE status <> 'ONLINE'
OR checkpoint_change# <> (SELECT checkpoint_change# FROM v$database);

SELECT status,
checkpoint_change#,
TO_CHAR(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
count(*)
FROM v$datafile_header
GROUP BY status, checkpoint_change#, checkpoint_time
ORDER BY status, checkpoint_change#, checkpoint_time;

SELECT dd.FILE#,
dd.NAME,
dd.STATUS,
to_char(dd.checkpoint_change#,'999999999999999') dfile_chkp_change,
to_char(dh.checkpoint_change#,'999999999999999') dfile_hed_chkp_change,
dh.recover,
dh.fuzzy
FROM v$datafile dd,
v$datafile_header dh
WHERE dd.FILE#=dh.FILE#;

SELECT name file_name,
recover,
fuzzy,
checkpoint_change#
FROM v$datafile_header
ORDER BY checkpoint_change#;

SELECT hxfil file_num,
hxfnm file_name,
fhtyp type,
hxerr validity,
fhscn scn,
fhtnm tablespace_name,
fhsta status ,
fhrba_seq sequence
FROM x$kcvfh;

SELECT group#,
thread#,
sequence#,
members,
archived,
status,
TO_CHAR(first_change#, '999999999999999') as first_change#
FROM v$log;

SELECT group#,
member
FROM v$logfile;

SELECT a.recid,
a.thread#,
a.sequence#,
a.name,
a.archived,
a.deleted,
TO_CHAR(a.completion_time, 'DD-MON-YYYY HH24:MI:SS') as completed
FROM v$archived_log a, v$log l
WHERE a.thread# = l.thread#
AND a.sequence# = l.sequence#;
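
The script opens a spool file at the top (spool MYDBINFO), so close it after the last query:

SPOOL OFF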