AWR – Automatic Workload Repository

Some useful commands to look up AWR information

select d.tablespace_name, d.file_id, file_name, ceil(max((e.block_id+e.blocks)*8192)/1024/1024) min_mb , d.bytes/1024/1024 MB 
from dba_extents e, dba_data_files d
where e.file_id (+) = d.file_id
and d.tablespace_name = 'SYSAUX'
group by d.tablespace_name, d.file_id, file_name, d.bytes;

Run $ORACLE_HOME/rdbms/admin/awrinfo.sql connected as SYSDBA; it reports space usage per AWR component.
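For example, from SQL*Plus (the spool file name may differ by version; awrinfo.txt is typical):

```sql
-- connect as SYSDBA, then run the script; it spools a report
-- with SYSAUX usage broken down per AWR component
@?/rdbms/admin/awrinfo.sql
```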


287679.1 - Space Management In Sysaux Tablespace with AWR in Use
782974.1 - How to Recreate The AWR (Automatic Workload) Repository?
329984.1 - Usage and Storage Management of SYSAUX tablespace occupants SM/AWR, SM/ADVISOR, SM/OPTSTAT and SM/OT

AWR tracing, Baseline, Snapshot

alter session set events 'immediate trace name awr_test level 16';

Check how many Baselines you have for this AWR repository:

select  count(*) from DBA_HIST_BASELINE;
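A baseline is created with DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE over a snapshot range; a minimal sketch (the snapshot IDs and baseline name are illustrative, matching the drop example below):

```sql
BEGIN
  -- preserve snapshots 102-105 as a named baseline
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 102,
    end_snap_id   => 105,
    baseline_name => 'PEAK_MONTH');
END;
/
```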

EXECUTE DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE(baseline_name => 'PEAK_MONTH');
EXECUTE DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(102, 105);

set pagesize 400
col baseline_name form a20

select * from wrm$_baseline order by baseline_id;

select count(*) from wrh$_sqltext where ref_count != 0;

select count(*) from wrh$_sqltext
where sql_id not in (select sql_id from wrh$_sqlstat
                     union
                     select sql_id from wrh$_sqlstat_bl);

select min(snap_id), max(snap_id) from dba_hist_snapshot;

select snap_id, count(*)
from wrh$_sql_plan
group by snap_id
order by snap_id;

select snap_id, timestamp from wrh$_sql_plan order by timestamp;


How to start tracing the Data Pump job

Please refer to Note 286496.1 Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump.  Depending on how the Export or Import Data Pump job was started, there are several ways to activate tracing of the Data Pump processes.

Use the TRACE parameter upon the start of the job.

Start tracing by specifying the TRACE parameter and a trace level when the Export Data Pump or Import Data Pump is started. This is the standard way to activate Data Pump tracing.

Example:
expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_s.dmp LOGFILE=expdp_s.log TABLES=scott.emp TRACE=480300

Use the TRACE parameter when restarting a job.

If an Export or Import Data Pump job was started without the TRACE parameter, it can be temporarily stopped and restarted with the TRACE parameter.
Example:
% expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_f.dmp LOGFILE=expdp_f.log TABLES=scott.emp TRACE=480300

-- press Control-C to enter Interactive Command mode,
-- and temporarily stop the job with the STOP_JOB command:

Export> stop_job
Are you sure you wish to stop this job ([yes]/no): yes

-- use the system-generated job name SYS_EXPORT_TABLE_01 to re-attach to the job
-- and specify the TRACE parameter with a tracing level:

expdp system/manager ATTACH=sys_export_table_01 TRACE=480300
Export> status

Data Guard Command-Line

si01:/home/oracle DGLABS - oracle: dgmgrl
DGMGRL for Linux: Version 11.2.0.2.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.

DGMGRL> show configuration
Configuration - DGLABS
  Protection Mode: MaxPerformance
  Databases:
    DGLABS - Primary database
    STBYLAB - Physical standby database

Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS


DGMGRL> show database 'DGLABS'
Database - DGLABS
  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    DGLABS

Database Status:
SUCCESS


DGMGRL> edit database 'DGLABS' set property logshipping='OFF';
DGMGRL> edit database 'DGLABS' set property logshipping='ON';
DGMGRL> show database 'DGLABS' logshipping;
DGMGRL> edit database 'STBYLAB' set state='APPLY-OFF';
DGMGRL> edit database 'STBYLAB' set state='APPLY-ON';

....

Implement Oracle Data Guard using EM Grid Control.

If you want to use Grid Control to create and manage Data Guard, you need the following components: an Oracle Management Server (OMS) with its repository database installed, and the Grid Control Agent installed on every server in your Data Guard configuration, both the primary and standby database systems.

The remaining steps are just navigating the wizard and answering its questions; when you are done, click Finish.

Below is an example configuration showing the answers I gave.

Primary Database
  Target Name            CHICAGO
  Database Name          chicago
  Instance Name          CHICAGO
  Database Version       11.2.0.2.0
  Oracle Home            /oracle/product/11.2.0/RDBMS
  Host                   si01.an.com
  Operating System       Red Hat Enterprise Linux Server release 5.6 (Tikanga) 2.6.18
  Host Username          oracle

Standby Database
  Target Name            NEWYORK
  Database Name          CHICAGO
  Instance Name          CHICAGO
  Oracle Server Version  11.2.0.2.0
  Oracle Home            /oracle/product/11.2.0/RDBMS
  Host                   pi01.an.com
  Operating System       Red Hat Enterprise Linux Server release 5.6 (Tikanga) 2.6.18
  Host Username          oracle
  Backup Type            New backup
  File Transfer Method   RMAN duplicate
  Database Unique Name   NEWYORK
  Database Storage       Automatic Storage Management
  ASM Instance           +ASM_pi01.an.com
  Standby Type           Physical Standby
  Database Area          DG_DATA01
  Flash Recovery Area    DG_FLASH01
  Flash Recovery Area Size (MB)                    10061
  Automatically Delete Archived Redo Log Files     Yes
  Configure Standby Database with Oracle Restart   No

Standby Database Storage
  Database Area                                DG_DATA01
  Flash Recovery Area                          DG_FLASH01
  Multiplex Redo Log Files and Control Files   No


Tablespaces
  Name       Size (MB)   Status   Type        Standby Location
  SYSAUX     781         ONLINE   PERMANENT   DG_DATA01
  SYSTEM     731         ONLINE   PERMANENT   DG_DATA01
  TEMP2      1025        ONLINE   TEMPORARY   DG_DATA01
  TOOLS      101         ONLINE   PERMANENT   DG_DATA01
  UNDOTBS1   71          ONLINE   UNDO        DG_DATA01
  USERS      54          ONLINE   PERMANENT   DG_DATA01
  

Directory Objects               

Directory Name    Primary Directory Path    Standby Directory Path
MY_EXP            /tmp/dba/CHICAGO          /tmp/dba/CHICAGO
DATA_PUMP_DIR     /admin/CHICAGO/dpdump/    /admin/CHICAGO/dpdump/

ERROR: NMO not setuid-root (Unix-only)


When setting up the standby database using Grid Control, you may run into ERROR: NMO not setuid-root (Unix-only). This error means the permissions on $AGENT_HOME/bin/nmo and $AGENT_HOME/bin/nmb are incorrect; the root cause is usually that the root.sh script was not run after the Agent installation.

Solution:
 
# as root, in $AGENT_HOME/bin
chown root nmo
chmod 6750 nmo
chown root nmb
chmod 6750 nmb

Disabled by Upload Manager - emctl status agent

oracle@si01:/apps/product/MW/agent11g/sysman/emd> emctl status agent
Oracle Enterprise Manager 11g Release 1 Grid Control 11.1.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
Agent Version     : 11.1.0.1.0
OMS Version       : 11.1.0.1.0
Protocol Version  : 11.1.0.0.0
Agent Home        : /apps/oracle/product/MW/agent11g
Agent binaries    : /apps/oracle/product/MW/agent11g
Agent Process ID  : 15478
Parent Process ID : 15458
Agent URL         : https://si01.an.com:3872/emd/main/
Repository URL    : https://si01.an.com:4900/em/upload
Started at        : 2010-06-14 12:39:18
Started by user   : oracle
Last Reload       : 2010-07-15 18:01:54
Last successful upload                       : 2010-07-29 16:08:52
Total Megabytes of XML files uploaded so far :  4571.19
Number of XML files pending upload           :        0
Size of XML files pending upload(MB)         :     0.00
Available disk space on upload filesystem    :     0.04%
Collection Status                            : Disabled by Upload Manager
Last successful heartbeat to OMS             : 2010-07-29 16:55:09

Solution:

oracle@si01:/apps/product/MW/agent11g/sysman/emd> rm lastupld.xml agntstmp.txt

oracle@si01:/apps/product/MW/agent11g/sysman/emd/upload> rm -f *
oracle@si01:/apps/product/MW/agent11g/sysman/emd/state> rm -f *.*
emctl secure agent

Securing agent...   Started.
Enter Agent Registration Password :

emctl stop agent
emctl start agent




TCP tcp_keepalive_time, tcp_keepalive_intvl, tcp_keepalive_probes


Check with your network admin for the firewall's idle timeout value, then set the kernel parameter net.ipv4.tcp_keepalive_time to a value lower than that timeout. This gives TCP keepalive a chance to keep idle connections alive through the firewall.

On Linux, the keepalive procedures use three user-driven variables:

/proc/sys/net/ipv4/tcp_keepalive_time : how long a connection must be idle before TCP starts sending keepalive probes (when keepalive is enabled). Default: 7200 seconds.

/proc/sys/net/ipv4/tcp_keepalive_intvl : how frequently probes are retransmitted when a probe is not acknowledged. Default: 75 seconds.

/proc/sys/net/ipv4/tcp_keepalive_probes : how many unacknowledged probes TCP sends before it decides the connection is broken. Default: 9 probes.
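To make lowered keepalive values survive a reboot, you could for example add them to /etc/sysctl.conf (the values here are illustrative; pick something below your firewall's idle timeout):

```
# probe idle connections after 10 minutes instead of 2 hours
net.ipv4.tcp_keepalive_time = 600
# retransmit unacknowledged probes every 60 seconds
net.ipv4.tcp_keepalive_intvl = 60
# declare the connection dead after 5 failed probes
net.ipv4.tcp_keepalive_probes = 5
```

Apply with sysctl -p, or change a value at runtime with sysctl -w.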


AIX (values are in half-second units):
no -a | grep tcp

tcp_keepcnt = 8
tcp_keepidle = 14400
tcp_keepinit = 150
tcp_keepintvl = 150


ORA-38754: FLASHBACK DATABASE not started; required redo log is not available

SQL> flashback database to restore point B4TEST;
flashback database to restore point B4TEST
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 5974114096519 to SCN 5974114096614
ORA-38761: redo log sequence 165 in thread 1, incarnation 1 could not be
accessed


Flashback Database first rewinds the database using the flashback logs, then recovers forward to the desired point in time using archived redo logs. Make sure the required archive logs are available in the archive destination, then re-issue the FLASHBACK DATABASE TO RESTORE POINT command.

ORA-03135: connection lost contact

SQLNET.EXPIRE_TIME specifies a time interval (in minutes) at which to send a probe to verify that client/server connections are still active. Setting a value greater than 0 ensures that connections are not left open indefinitely after an abnormal client termination. If the probe finds a terminated connection, or a connection that is no longer in use, it returns an error, causing the server process to exit.


Default = 0
Minimum = 0
Recommended value = 10




Other parameters that can affect timeouts (the examples below use 120 and 300 seconds):


SQLNET.INBOUND_CONNECT_TIMEOUT=120 (server's SQLNET.ORA)


INBOUND_CONNECT_TIMEOUT_<listener_name> =120 (server's LISTENER.ORA)


SQLNET.SEND_TIMEOUT = 300


SQLNET.RECV_TIMEOUT = 300
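Put together, a server-side sqlnet.ora using these settings might look like the sketch below (the values are the examples above, not recommendations for every system, and the listener name LISTENER is assumed):

```
# server-side sqlnet.ora
SQLNET.EXPIRE_TIME = 10
SQLNET.INBOUND_CONNECT_TIMEOUT = 120
SQLNET.SEND_TIMEOUT = 300
SQLNET.RECV_TIMEOUT = 300

# server-side listener.ora
INBOUND_CONNECT_TIMEOUT_LISTENER = 120
```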


Restart LISTENER, LISTENER_SCAN1, LISTENER_SCAN2, LISTENER_SCAN3


If ORA-03135 occurs when connecting from ODP.NET, a workaround is to set Validate Connection = True in the connection string. If the Oracle version is 11.2.0.2 and you use SCAN, try connecting through the VIPs instead and see whether the error is resolved.

Indexes - few notes

DML activity can cause B*Tree indexes to develop sparsely populated blocks. In an OLTP environment with heavy DML, some indexes develop areas with few entries per block, which increases the number of I/Os required for range scans. If the number of deleted leaf rows exceeds 20% of the total leaf rows, rebuild the index or coalesce it (coalescing combines adjacent leaf blocks when two or more of them fit into one block) to improve performance.

SQL> analyze index APP_OWNER.ESP_PERMISSION_FK1 VALIDATE STRUCTURE;

INDEX_STATS holds only one row of statistics, from the last index analyzed.

SQL> select name, del_lf_rows/lf_rows*100 reclaimable_space_pct from index_stats;

NAME                    RECLAIMABLE_SPACE_PCT
----------------------- ---------------------
ESP_PERMISSION_FK1                         20
General tips for creating indexes to improve SQL performance:
·         Use B*Tree indexes for join columns and columns that frequently appear in WHERE clauses.
·         Use composite indexes for columns frequently used together in WHERE clauses; for large indexes, consider compressed key indexes.
·         Use bitmap indexes for columns with few distinct values and a large number of rows, on read-only or read-mostly tables.
·         Reverse key indexes can reduce block contention (buffer busy waits, index enqueue waits, etc.) under concurrent DML, but cannot be used for index range scans.
·         Compressed key indexes store each repeating key prefix value once per block and have the index entries reference it. They rarely hurt DML or query performance and work well for large indexes.
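As a hedged illustration of the tips above (the table and column names are made up):

```sql
-- composite index on columns used together in WHERE clauses,
-- with key compression on the low-cardinality leading column
CREATE INDEX app.contract_region_idx
  ON app.my_contract (region_id, contract_id) COMPRESS 1;

-- bitmap index: few distinct values, read-mostly table
CREATE BITMAP INDEX app.contract_status_bix
  ON app.my_contract (status);

-- reverse key index to spread right-hand-side insert contention
CREATE INDEX app.contract_seq_rix
  ON app.my_contract (contract_seq) REVERSE;
```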

Execution Plan Utilities


1) Set statement_id
explain plan
set statement_id = 'plan_contract' for
select SYS_ID,CONTRACT_ID,LIAB_AMT
 from MYAPPS.contract
 where SYS_ID >10;

select plan_table_output from table(dbms_xplan.display('plan_table','plan_contract','serial'));

2) Querying the AWR to get the stored plans

-- the statement whose stored plan we want:
select EMP_ID, CONTRACT_ID, MAXI_LIAB_AMT
from app.my_contract
where EMP_ID > 15;

select sql_id, sql_text
from v$sql
where sql_text like '%EMP_ID > 15%';

SQL_ID
-------------
SQL_TEXT
--------------------------------------------------------------------------------
gkbhq6v23bdvm
select sql_id, sql_text from v$sql where sql_text like '%EMP_ID > 15%'


SQL> select plan_table_output from table(dbms_xplan.display_awr('gkbhq6v23bdvm'));

no rows selected ==> this indicates the statement has not yet been captured into AWR. You can force all SQL statements to be captured by changing the topnsql setting to MAXIMUM:


SQL> exec dbms_workload_repository.modify_snapshot_settings(topnsql =>'MAXIMUM');
PL/SQL procedure successfully completed.

SQL> exec dbms_workload_repository.create_snapshot();
PL/SQL procedure successfully completed.

SQL> exec dbms_workload_repository.modify_snapshot_settings(topnsql =>'DEFAULT');
PL/SQL procedure successfully completed.

SQL> select plan_table_output from table(dbms_xplan.display_awr('gkbhq6v23bdvm'));

3) set autotrace on     
set autotrace traceonly
set autotrace traceonly explain
set autotrace traceonly statistics

Enable/Disable SQL Trace

For your own session:
exec dbms_monitor.session_trace_enable;   -- or: exec dbms_session.set_sql_trace(true);
exec dbms_monitor.session_trace_disable;  -- or: exec dbms_session.set_sql_trace(false);

You can also enable SQL trace for another session:
Execute dbms_monitor.session_trace_enable(session_id, serial_num);
Execute dbms_monitor.session_trace_enable(session_id, serial_num, waits=>TRUE, binds=>TRUE);
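To find the session_id and serial# for another session, you might query V$SESSION first (the username filter and the IDs passed to the second call are illustrative):

```sql
-- identify the target session
select sid, serial#, username, program
from v$session
where username = 'APP_OWNER';

-- then enable tracing with waits and binds for that session
exec dbms_monitor.session_trace_enable(session_id => 131, serial_num => 45, waits => TRUE, binds => TRUE);
```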

tkprof command to format

tkprof tracefile output sys=no sort=execpu print=3
(sort=execpu orders statements by CPU time spent executing; print=3 limits the report to the top 3 statements)

Changing ADDM Attributes


ADDM is controlled by the STATISTICS_LEVEL parameter; by default it is set to STATISTICS_LEVEL=TYPICAL. ADDM's analysis of I/O performance depends on the DBIO_EXPECTED parameter, which describes the expected performance of the I/O subsystem as the average time to read a single database block, in microseconds.

To set the expected speed (the default is 10000 microseconds, i.e. 10 ms):

SQL> exec DBMS_ADVISOR.set_default_task_parameter('ADDM','DBIO_EXPECTED',8000);

PL/SQL procedure successfully completed.

select parameter_value, is_default
from dba_advisor_def_parameters
where advisor_name = 'ADDM' and parameter_name = 'DBIO_EXPECTED';