Create Flash devices and Flash-based disk group


Examine your current flash-based cell disks:

CellCLI> list celldisk attributes name, freeSpace where diskType=FlashDisk;
FD_00_cell  500M
FD_01_cell  500M
FD_02_cell  500M
FD_03_cell  500M

Create a Smart Flash Cache with a total size of 1 GB:

CellCLI> create flashcache all size=1024m;
Flash cache cell_FLASHCACHE successfully created
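
To check the attributes of the flash cache just created (output will vary by cell), you can run:

CellCLI> list flashcache detail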


So the available free space on each flash-based cell disk is now:
CellCLI> list celldisk attributes name, freeSpace where diskType=FlashDisk;
FD_00_cell 192M
FD_01_cell 192M
FD_02_cell 192M
FD_03_cell 192M

CellCLI> create griddisk all flashdisk prefix=flash
GridDisk flash_FD_00_cell successfully created
GridDisk flash_FD_01_cell successfully created
GridDisk flash_FD_02_cell successfully created
GridDisk flash_FD_03_cell successfully created

The flash grid disks are now created on the flash-based cell disks:

CellCLI> list griddisk attributes name, size, ASMModeStatus where disktype=flashdisk;
flash_FD_00_cell 192M UNUSED
flash_FD_01_cell 192M UNUSED
flash_FD_02_cell 192M UNUSED
flash_FD_03_cell 192M UNUSED

SQL >  select path, header_status from v$asm_disk
  2* where path like '%flash%'
PATH HEADER_STATU
---------------------------------------------------------------- ------------
o/192.168.10.101/flash_FD_02_cell CANDIDATE
o/192.168.10.101/flash_FD_03_cell CANDIDATE
o/192.168.10.101/flash_FD_01_cell CANDIDATE
o/192.168.10.101/flash_FD_00_cell CANDIDATE
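
With the flash grid disks visible to ASM as CANDIDATE, a flash-based disk group can be created on them. A minimal sketch, assuming a disk group named FLASH and external redundancy (adjust the name and redundancy to your requirements):

SQL> create diskgroup FLASH external redundancy disk 'o/*/flash*' attribute 'compatible.rdbms' = '11.2.0.0.0', 'compatible.asm' = '11.2.0.0.0', 'cell.smart_scan_capable' = 'TRUE', 'au_size' = '4M';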

Create grid disks and ASM diskgroup

Prepare grid disks: create a set of grid disks on the available free space of the hard-disk-based cell disks.

CellCLI> list griddisk
 reco_CD_disk07_cell  active
 reco_CD_disk08_cell  active
 reco_CD_disk09_cell  active
 reco_CD_disk10_cell  active
 reco_CD_disk11_cell  active
 reco_CD_disk12_cell  active

CellCLI> create griddisk all harddisk prefix=data2, size=100m;
Cell disks were skipped because they had no freespace for grid disks: CD_disk07_cell, CD_disk08_cell, CD_disk09_cell, CD_disk10_cell, CD_disk11_cell, CD_disk12_cell.
GridDisk data2_CD_disk01_cell successfully created
GridDisk data2_CD_disk02_cell successfully created
GridDisk data2_CD_disk03_cell successfully created
GridDisk data2_CD_disk04_cell successfully created
GridDisk data2_CD_disk05_cell successfully created
GridDisk data2_CD_disk06_cell successfully created

CellCLI> list griddisk;
 data2_CD_disk01_cell  active
 data2_CD_disk02_cell  active
 data2_CD_disk03_cell  active
 data2_CD_disk04_cell  active
 data2_CD_disk05_cell  active
 data2_CD_disk06_cell  active
 reco_CD_disk07_cell   active
 reco_CD_disk08_cell   active
 reco_CD_disk09_cell   active
 reco_CD_disk10_cell   active
 reco_CD_disk11_cell   active
 reco_CD_disk12_cell   active

CellCLI> list griddisk attributes name, size, ASMModeStatus
 data2_CD_disk01_cell  96M   UNUSED
 data2_CD_disk02_cell  96M   UNUSED
 data2_CD_disk03_cell  96M   UNUSED
 data2_CD_disk04_cell  96M   UNUSED
 data2_CD_disk05_cell  96M   UNUSED
 data2_CD_disk06_cell  96M   UNUSED
 reco_CD_disk07_cell   448M  ONLINE
 reco_CD_disk08_cell   448M  ONLINE
 reco_CD_disk09_cell   448M  ONLINE
 reco_CD_disk10_cell   448M  ONLINE
 reco_CD_disk11_cell   448M  ONLINE
 reco_CD_disk12_cell   448M  ONLINE

Create the ASM disk group with AU_SIZE=4M, the recommended value for Exadata:

SQL> create diskgroup DATA external redundancy disk 'o/*/data2*' attribute 'compatible.rdbms' = '11.2.0.0.0', 'compatible.asm' = '11.2.0.0.0', 'cell.smart_scan_capable' = 'TRUE', 'au_size' = '4M';

Diskgroup created.


SQL> select name, state from v$asm_diskgroup;

NAME     STATE
-------------------------------- -----------
DATA     MOUNTED
FRA     MOUNTED
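
As an optional check, the allocation unit size (returned in bytes) and the disk group capacity can be confirmed from ASM; the exact figures depend on your configuration:

SQL> select name, allocation_unit_size, total_mb, free_mb from v$asm_diskgroup;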


CellCLI> list griddisk attributes name, size, ASMModeStatus
 data2_CD_disk01_cell  96M   ONLINE
 data2_CD_disk02_cell  96M   ONLINE
 data2_CD_disk03_cell  96M   ONLINE
 data2_CD_disk04_cell  96M   ONLINE
 data2_CD_disk05_cell  96M   ONLINE
 data2_CD_disk06_cell  96M   ONLINE
 reco_CD_disk07_cell   448M  ONLINE
 reco_CD_disk08_cell   448M  ONLINE
 reco_CD_disk09_cell   448M  ONLINE
 reco_CD_disk10_cell   448M  ONLINE
 reco_CD_disk11_cell   448M  ONLINE
 reco_CD_disk12_cell   448M  ONLINE

Exadata configuration tasks: cell and storage provisioning

To run the raw performance test on a cell, use the CALIBRATE command; CELLSRV must be shut down first:
CellCLI> alter cell shutdown services cellsrv
Stopping CELLSRV services... 
The SHUTDOWN of CELLSRV services was successful.
CellCLI> calibrate;
Calibration will take a few minutes...
Aggregate random read throughput across all hard disk luns: 139 MBPS
Aggregate random read throughput across all flash disk luns: 2720.05 MBPS
Aggregate random read IOs per second (IOPS) across all hard disk luns: 1052
Aggregate random read IOs per second (IOPS) across all flash disk luns: 143248
Controller read throughput: 5477.08 MBPS
Calibrating hard disks (read only) ...
Calibrating flash disks (read only, note that writes will be significantly slower) ...
…….
CALIBRATE stress test is now running...
Calibration has finished.
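
When the test completes, restart CELLSRV:

CellCLI> alter cell startup services cellsrv

Note that CALIBRATE can also be run without stopping CELLSRV by adding the FORCE option (calibrate force), but this impacts cell performance while the test runs.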

CellCLI> alter cell validate configuration;
Cell cell successfully altered
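
Problems detected on the cell (hardware faults, software alerts, and so on) can be reviewed afterwards in the alert history:

CellCLI> list alerthistory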

List the cell disks that still have free space:

CellCLI> list celldisk where freespace !=0
 CD_disk01      normal

Create grid disks on the remaining free space of all hard-disk-based cell disks:

CellCLI> create griddisk all harddisk prefix=datagri

Drop and re-create a cell disk:

CellCLI> drop celldisk CD_disk01_cell;
CellDisk CD_disk01_cell successfully dropped

CellCLI> list celldisk

CellCLI> create celldisk all;
CellDisk CD_disk01 successfully created

Check that no grid disk would force an ASM disk group offline if the cell is taken down:

CellCLI> list griddisk attributes name where asmdeactivationoutcome !='Yes'

If no grid disks are returned, inactivate all grid disks on the cell:

CellCLI> alter griddisk all inactive

To power off Exadata Storage Servers

shutdown -h -y now

After the cell is powered back on, reactivate the grid disks and check them until every disk reports asmmodestatus ONLINE:

CellCLI> alter griddisk all active
CellCLI> list griddisk attributes name, asmmodestatus


Apply Database PSU 11.2.0.3.2

Download the latest OPatch (version 11.2.0.3.0) and verify it:

opatch version
OPatch Version: 11.2.0.3.0

Create the OCM configuration response file:

$ <ORACLE_HOME>/OPatch/ocm/bin/emocmrsp

sudo opatch auto /apps/oracle/software/11.2.0.3/psu -oh /apps/oracle/product/11.2.0.3/RACDB -ocmrf /tmp/ocm.rsp

Note:  /apps/oracle/software/11.2.0.3/psu is the PSU software location.

[sudo] password for oracle:
Executing /usr/bin/perl /apps/oracle/product/11.2.0.3/RACDB/OPatch/crs/patch112.pl -patchdir /apps/oracle/software/11.2.0.3 -patchn psu -oh /apps/oracle/product/11.2.0.3/RACDB -ocmrf /tmp/ocm.rsp -paramfile /apps/11.2.0/grid/crs/install/crsconfig_params
opatch auto log file location is /apps/oracle/product/11.2.0.3/RACDB/OPatch/crs/../../cfgtoollogs/opatchauto2012-07-15_14-26-50.log
Detected Oracle Clusterware install
Using configuration parameter file: /apps/11.2.0/grid/crs/install/crsconfig_params
patch /apps/oracle/software/11.2.0.3/psu/13696251/custom/server/13696251  apply successful for home  /apps/oracle/product/11.2.0.3/RACDB
patch /apps/oracle/software/11.2.0.3/psu/13696216  apply successful for home  /apps/oracle/product/11.2.0.3/RACDB
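
After the binaries are patched, verify the inventory and load the PSU post-install SQL into each database. The authoritative steps are in the PSU README; for an 11.2.0.3 Database PSU they are typically along these lines:

opatch lsinventory -oh /apps/oracle/product/11.2.0.3/RACDB
cd /apps/oracle/product/11.2.0.3/RACDB/rdbms/admin
sqlplus / as sysdba
SQL> @catbundle.sql psu apply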

Exadata X2-2 Summary


                             Quarter      Half       Full
Database servers                   2         4          8
Database cores                    24        48         96
Storage servers                    3         7         14
Number of disks                   36        84        168
SAS (small and fast), TB          21        50        100
SATA (big and slower), TB         72       168        336
InfiniBand switches                2         3          3
Flash IOPS                   375,000   750,000  1,500,000

How to shut down an Exadata Storage Server

It is sometimes necessary to shut down a cell when performing maintenance on an Exadata Storage Server. Before doing so, verify that taking the storage server offline will not impact Oracle ASM disk group and database availability; this depends on the ASM redundancy level and on where the mirror copies of the data reside. Below are the high-level steps to shut down an Exadata Storage Server.

CellCLI> list griddisk attributes name where asmdeactivationoutcome !='Yes'
If any grid disks are returned (as below), it is not safe to take the storage server offline:
                data_CD_1_ancell05
                data_CD_2_ancell05
                data_CD_3_ancell05
                data_CD_4_ancell05
                data_CD_5_ancell05
                data_CD_6_ancell05
                data_FD_00_ancell05
                data_FD_01_ancell05
                data_FD_02_ancell05
                data_FD_03_ancell05
                reco_CD_7_ancell05
                reco_CD_8_ancell05
                reco_CD_9_ancell05
                reco_CD_disk10_ancell05
                reco_CD_disk11_ancell05
                reco_CD_disk12_ancell05


Inactivate the grid disks before bringing the Exadata Storage Server offline:
CellCLI> alter griddisk all inactive
GridDisk data_CD_1_ancell05 successfully altered
GridDisk data_CD_2_ancell05 successfully altered
GridDisk data_CD_3_ancell05 successfully altered
GridDisk data_CD_4_ancell05 successfully altered
….
….

CellCLI> list griddisk where status != 'inactive'
If no grid disks are returned (that is, all grid disks are inactive), you can power down the Exadata Storage Server.

To power off Exadata Storage Servers
shutdown -h -y now
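
After maintenance, once the cell is powered back on and CELLSRV is running, reactivate the grid disks and monitor them until asmmodestatus returns to ONLINE (it may show SYNCING while ASM resynchronizes):

CellCLI> alter griddisk all active
CellCLI> list griddisk attributes name, asmmodestatus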

Apply 11.2.0.3.1 Grid Infrastructure Patch Set Update (GI PSU)


+ASM1 > $ORACLE_HOME/OPatch/ocm/bin/emocmrsp

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit
http://www.oracle.com/support/policies.html for details.
Email address/User Name:
ashley.nguyen@dbalab.com
Provide your My Oracle Support password to receive security updates via your My Oracle Support account.
Password (optional):          
The OCM configuration response file (ocm.rsp) was successfully created.

mv ocm.rsp /tmp

sudo -u root opatch auto /apps/oracle/software/psu.11.2.0.3.1 -oh /apps/11.2.0/grid -ocmrf /tmp/ocm.rsp

<=== Failed ===>

If opatch auto fails, re-run the command; when prompted, enter 'yes' if this is not a shared home (see the example below).

sudo -u root opatch auto /apps/oracle/software/psu.11.2.0.3.1 -oh /apps/11.2.0/grid -ocmrf /tmp/ocm.rsp

Executing /usr/bin/perl /apps/11.2.0/grid/OPatch/crs/patch112.pl -patchdir /apps/oracle/software -patchn psu.11.2.0.3.1 -oh /apps/11.2.0/grid -ocmrf /tmp/ocm.rsp -paramfile /apps/11.2.0/grid/crs/install/crsconfig_params
opatch auto log file location is /apps/11.2.0/grid/OPatch/crs/../../cfgtoollogs/opatchauto2012-06-04_17-41-29.log
Detected Oracle Clusterware install
Using configuration parameter file: /apps/11.2.0/grid/crs/install/crsconfig_params

Unable to determine if /apps/11.2.0/grid is shared oracle home
Enter 'yes' if this is not a shared home or if the prerequiste actions are performed to patch this shared home (yes/no):yes
Successfully unlock /apps/11.2.0/grid
patch /apps/oracle/software/psu.11.2.0.3.1/13348650  apply successful for home  /apps/11.2.0/grid
patch /apps/oracle/software/psu.11.2.0.3.1/13343438  apply successful for home  /apps/11.2.0/grid
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9312: Existing ADVM/ACFS installation detected.
ACFS-9314: Removing previous ADVM/ACFS installation.
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4123: Oracle High Availability Services has been started.
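
To confirm the stack is healthy and the patches are registered in the inventory, a couple of quick checks (using the same paths as above):

/apps/11.2.0/grid/bin/crsctl check crs
/apps/11.2.0/grid/OPatch/opatch lsinventory -oh /apps/11.2.0/grid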

Deinstalling the Grid Infrastructure Software

You don't have to manually remove directories. There is a deinstall script in the Grid home directory. The deinstall tool checks the configuration and prompts for confirmation before it performs a clean deinstallation (see below).

cd /apps/11.2.0/grid/deinstall

LADB1 - oracle: ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2012-05-23_02-25-20PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /apps/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /apps/oracle
Checking for existence of central inventory location /apps/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /apps/11.2.0/grid
The following nodes are part of this cluster: dbsrvl100,dbsrvl101
Checking for sufficient temp space availability on node(s) : 'dbsrvl100,dbsrvl101'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2012-05-23_02-25-20PM/logs//crsdc.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2012-05-23_02-25-20PM/logs/netdc_check2012-05-23_02-25-48-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2012-05-23_02-25-20PM/logs/asmcadc_check2012-05-23_02-37-14-PM.log

Automatic Storage Management (ASM) instance is detected in this Oracle home /apps/11.2.0/grid.
ASM Diagnostic Destination : /apps/oracle
ASM Diskgroups : +GI
ASM diskstring : /dev/oracleasm/disks
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and it's contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you  want to modify above information (y|n) [n]: y
Specify the ASM Diagnostic Destination [/apps/oracle]:
Specify the diskstring [/dev/oracleasm/disks]:
Specify the diskgroups that are managed by this ASM instance [+GI]:

De-configuring ASM will drop the diskgroups at cleanup time. Do you want deconfig tool to drop the diskgroups y|n [y]: y


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /apps/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:dbsrvl100,dbsrvl101
Oracle Home selected for deinstall is: /apps/11.2.0/grid
Inventory Location where the Oracle home registered is: /apps/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2012-05-23_02-25-20PM/logs/deinstall_deconfig2012-05-23_02-25-25-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2012-05-23_02-25-20PM/logs/deinstall_deconfig2012-05-23_02-25-25-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2012-05-23_02-25-20PM/logs/asmcadc_clean2012-05-23_02-38-32-PM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2012-05-23_02-25-20PM/logs/netdc_clean2012-05-23_02-40-15-PM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Listener stopped successfully.
    Unregistering listener: LISTENER
    Listener unregistered successfully.
Listener de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "dbsrvl101".

/tmp/deinstall2012-05-23_02-25-20PM/perl/bin/perl -I/tmp/deinstall2012-05-23_02-25-20PM/perl/lib -I/tmp/deinstall2012-05-23_02-25-20PM/crs/install /tmp/deinstall2012-05-23_02-25-20PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-05-23_02-25-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "dbsrvl100".

/tmp/deinstall2012-05-23_02-25-20PM/perl/bin/perl -I/tmp/deinstall2012-05-23_02-25-20PM/perl/lib -I/tmp/deinstall2012-05-23_02-25-20PM/crs/install /tmp/deinstall2012-05-23_02-25-20PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-05-23_02-25-20PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Press Enter after you finish running the above commands

<----------------------------------------

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/apps/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/apps/11.2.0/grid' on the local node : Done

Delete directory '/apps/oraInventory' on the local node : Done

The Oracle Base directory '/apps/oracle' will not be removed on local node. The directory is not empty.

Detach Oracle home '/apps/11.2.0/grid' from the central inventory on the remote nodes 'dbsrvl101' : Done

Delete directory '/apps/11.2.0/grid' on the remote nodes 'dbsrvl101' : Done

Delete directory '/apps/oraInventory' on the remote nodes 'dbsrvl101' : Done

The Oracle Base directory '/apps/oracle' will not be removed on node 'dbsrvl101'. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2012-05-23_02-25-20PM' on node 'dbsrvl100'
Clean install operation removing temporary directory '/tmp/deinstall2012-05-23_02-25-20PM' on node 'dbsrvl101'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "dbsrvl101"
Oracle Clusterware is stopped and successfully de-configured on node "dbsrvl100"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/apps/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/apps/11.2.0/grid' on the local node.
Successfully deleted directory '/apps/oraInventory' on the local node.
Successfully detached Oracle home '/apps/11.2.0/grid' from the central inventory on the remote nodes 'dbsrvl101'.
Successfully deleted directory '/apps/11.2.0/grid' on the remote nodes 'dbsrvl101'.
Successfully deleted directory '/apps/oraInventory' on the remote nodes 'dbsrvl101'.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'dbsrvl100,dbsrvl101' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'dbsrvl100,dbsrvl101' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

strace - trace system calls and signals

strace traces system calls and signals. It is a useful tool for diagnosing and debugging processes on Linux.

To install (as root):

yum install strace
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package strace.x86_64 0:4.5.18-11.el5_8 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
 Package     Arch        Version                Repository                 Size
================================================================================
Installing:
 strace      x86_64      4.5.18-11.el5_8        rhel-x86_64-server-5      177 k
Transaction Summary
================================================================================
Install       1 Package(s)
Upgrade       0 Package(s)
Total download size: 177 k
Is this ok [y/N]: y
Downloading Packages:
strace-4.5.18-11.el5_8.x86_64.rpm                                                                                                                           | 177 kB     00:00    
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : strace  
 
  Installed:
    strace.x86_64 0:4.5.18-11.el5_8                                                                                                                                                 
 
Complete!

Some useful parameters:

-tt  Prefix each line of the trace with the time of day, including microseconds.
-T   Show the time spent in system calls; this records the time difference between the beginning and the end of each system call.
-o filename  Write the trace output to the file filename rather than to the screen (stderr).
-p PID  Attach to the process with process ID PID and begin tracing. The trace can be stopped at any time with CTRL-C; strace detaches itself from the traced process and leaves it running. Multiple -p options can be used to attach to up to 32 processes.

-s SIZE  Specify the maximum string size to print (the default is 32).
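
Before attaching, you need the PID of the target process; one way to find it for an Oracle background process (the process name shown is just an example):

ps -ef | grep ora_mmnl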

Examples: 

To trace an Oracle process and see what it is doing:
strace -p 28446 -s 100 -o /tmp/orammnl.txt

To trace only the open and read system calls:
strace -e  trace=open,read -p 28446 -o /tmp/orammnl.debug

To see how much time the process spends in system calls:
strace -ttT -p 15878 -o /tmp/tracetiming.out

14:54:26.232457 open("/proc/15914/stat", O_RDONLY) = 26 <0.000028>
14:54:26.232584 read(26, "15914 (oracle) S 1 15914 15914 0"..., 999) = 243 <0.000049>
14:54:26.232884 close(26)               = 0 <0.000019>
14:54:26.233002 open("/proc/15916/stat", O_RDONLY) = 26 <0.000027>
14:54:26.233127 read(26, "15916 (oracle) S 1 15916 15916 0"..., 999) = 240 <0.000046>
14:54:26.233269 close(26)               = 0 <0.000019>
...
At 14:54:26.233127, the read() call took 0.046 milliseconds (46 microseconds) to read 240 bytes.
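
For an aggregate view rather than line-by-line timings, strace can also summarize the time, calls, and errors per system call with -c (the PID shown is illustrative):

strace -c -p 15878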
 

RDA TLsecure tool

The RDA TLsecure tool can help identify potential database security risks. See the list of checks below.

ovmrac01:/apps/oracle/product/11.2.0.3/RACDB/rda
LADB01 - oracle: ./rda.pl -vT secure
        Testing ...
Identification of Potential Security Risks ...
Identification of Potential Security Risks
  1   Users with a default password (11g)
  2   Users with a well known password
  3   Users not visible in dba_users
  4   Operating system authenticated user names with a password
  5   Users with SYSTEM as default tablespace
  6   Users and privileges from gv$pwfile_users
  7   Privileges not granted by their owner
  8   Listeners without a password
  9   Listeners with local operating system authentication (10g and later)
  10  Listeners modifiable at runtime
  11  AUTHENTICATION_SERVICES value in sqlnet.ora files
  12  Oracle executables owned by different users of a same group (UNIX only)
  *   Run all checks
Enter a menu item or . to end
> *