Exadata Smart Flash Cache

Exadata Smart Flash Cache is a write-through cache built on the flash disks of each Exadata storage server. It caches data for all database instances that access the storage cell.

The cache handles different types of database I/O differently:

Caching: 
Control file reads and writes are cached
File header reads and writes are cached
Data blocks and index blocks are cached

Skip Caching:
I/Os for ASM mirror copies, backups, Data Pump operations, and tablespace formatting are skipped, and the cache is resistant to large table scans.

For a full rack, you cannot use more than 4.3 TB (about 80% of the total flash available to the Smart Flash Cache) for KEEP objects. An object marked KEEP can still be removed from flash in some situations: the object has not been accessed in 48 hours, a block has not been accessed in the last 24 hours, or the object is dropped or truncated.
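
To check whether a KEEP object is actually resident in flash, you can list the cache contents on a cell. A quick sketch; the object number below is hypothetical (it corresponds to DATA_OBJECT_ID in DBA_OBJECTS):

CellCLI> list flashcachecontent where objectNumber=57435 attributes dbUniqueName, objectNumber, cachedKeepSize, cachedSize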


To create the Exadata Smart Flash Cache, first identify the flash cell disks:
CellCLI> list celldisk attributes name, disktype, size where name like 'FD.*'
FD_00_cell FlashDisk 496M
FD_01_cell FlashDisk 496M
FD_02_cell FlashDisk 496M
FD_03_cell FlashDisk 496M

CellCLI> create flashcache all size=1000m;
Flash cache cell_FLASHCACHE successfully created

CellCLI> list flashcache detail;
name: cell_FLASHCACHE
cellDisk: FD_01_cell,FD_03_cell,FD_02_cell,FD_00_cell
creationTime: 2013-01-28T16:38:18-08:00
degradedCelldisks:
effectiveCacheSize: 960M
id: 2f225eb8-d9e3-41a6-a7d8-691dd04809b5
size: 960M
status: normal

CellCLI> list griddisk
data2_CD_disk01_cell active
data2_CD_disk02_cell active
data2_CD_disk03_cell active
data2_CD_disk04_cell active
data2_CD_disk05_cell active
data2_CD_disk06_cell active
reco_CD_disk07_cell active
reco_CD_disk08_cell active
reco_CD_disk09_cell active
reco_CD_disk10_cell active

CellCLI> create griddisk all flashdisk prefix=flash;
GridDisk flash_FD_00_cell successfully created
GridDisk flash_FD_01_cell successfully created
GridDisk flash_FD_02_cell successfully created
GridDisk flash_FD_03_cell successfully created

CellCLI> list griddisk
data2_CD_disk01_cell active
data2_CD_disk02_cell active
data2_CD_disk03_cell active
data2_CD_disk04_cell active
data2_CD_disk05_cell active
data2_CD_disk06_cell active
flash_FD_00_cell active
flash_FD_01_cell active
flash_FD_02_cell active
flash_FD_03_cell active
reco_CD_disk07_cell active
reco_CD_disk08_cell active
reco_CD_disk09_cell active
reco_CD_disk10_cell active

Objects are cached in the Exadata Smart Flash Cache according to an automatic caching policy, but you can control the policy for a database object through the CELL_FLASH_CACHE storage attribute, which takes one of three values: NONE (never cache), DEFAULT (automatic caching), or KEEP (more aggressive caching).

SQL> alter table anguyen.mycustomers storage (cell_flash_cache keep);

Table altered.
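
To confirm the new policy, check the CELL_FLASH_CACHE column in the data dictionary:

select table_name, cell_flash_cache
from dba_tables
where owner = 'ANGUYEN' and table_name = 'MYCUSTOMERS';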

Hybrid Columnar Compression, Column Projection, Predicate Filtering, Storage Indexes, Bloom Filters

Hybrid Columnar Compression:
Tables are organized into compression units (CUs) that are larger than database blocks; a compression unit is usually around 32 KB. Within a compression unit the data is organized by column, which places similar values close together and improves compression. There are two modes of compression (see the sketch after this list):

1) Query mode - around 10x savings
2) Archive mode - 15x to 50x savings
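
A minimal sketch of enabling each mode at table creation (the table and source names are hypothetical; QUERY and ARCHIVE each accept a LOW or HIGH level):

-- Query mode: for data that is still queried regularly
create table sales_q compress for query high
as select * from sales;

-- Archive mode: for rarely accessed, historical data
create table sales_a compress for archive high
as select * from sales;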


Column projection: Returns only the columns of interest from the storage tier to the database tier.

Predicate filtering: Returns only the rows of interest to the database tier.
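
For example, with Smart Scan in play, a query like the following (table and columns hypothetical) makes the cells return only the two projected columns and only the matching rows, instead of complete blocks:

select /*+ full(s) */ cust_id, amount
from sales s
where amount > 10000;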

Storage Indexes:  Maintain a minimum and maximum value for up to eight columns of a table per 1 MB disk storage unit. They cut the time spent reading from the storage servers by eliminating non-matching storage regions as a pre-filter, much like a partitioning mechanism.

To disable or enable storage indexes:
alter system set "_kcfis_storageidx_disabled" = true;   -- set to false to re-enable
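
To gauge how much I/O the storage indexes are saving, check the corresponding statistic in v$sysstat:

select name, value
from v$sysstat
where name = 'cell physical IO bytes saved by storage index';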

To disable or enable Smart Scan:
alter session set cell_offload_processing = false;   -- set to true to re-enable

To measure the effectiveness of Smart Scan, run a query like the following against v$sql:

select sql_id, physical_read_bytes, physical_write_bytes,
       io_interconnect_bytes, io_cell_offload_eligible_bytes,
       io_cell_uncompressed_bytes, io_cell_offload_returned_bytes,
       optimized_phy_read_requests
from v$sql
where sql_text like 'your query here %';


Simple joins (Bloom filters):
Bloom filters reduce traffic between parallel query slave processes, mostly in RAC, by letting join filtering be offloaded to the storage cells.

To disable or enable bloom filter offloading:
alter session set "_bloom_predicate_pushdown_to_storage" = false;   -- set to true to re-enable


Exadata storage objects

Hierarchy of Exadata storage objects:

PHYSICAL DISK -> LUN -> CELL DISK -> GRID DISK -> ASM DISK
 
 
[celladmin@cell ~]$ cellcli
CellCLI: Release 11.2.2.1.0 - Production on Thu Jan 10 15:43:39 PST 2013

Copyright (c) 2007, 2009, Oracle.  All rights reserved.
Cell Efficiency Ratio: 17M

CellCLI> list cell detail
         name:                   cell
         bmcType:                absent
         cellVersion:            OSS_MAIN_LINUX_101005
         cpuCount:               1
         fanCount:               1/1
         fanStatus:              normal
         id:                     3c4bc34b-9acd-4a98-a910-c660c9c76c03
         interconnectCount:      1
         interconnect1:          eth0
         iormBoost:              0.0
         ipaddress1:             192.168.10.101/24
         kernelVersion:          2.6.18-194.el5
         makeModel:              Fake hardware
         metricHistoryDays:      7
         offloadEfficiency:      17.3M
         powerCount:             1/1
         powerStatus:            normal
         status:                 online
         temperatureReading:     0.0
         temperatureStatus:      normal
         upTime:                 3 days, 6:22
         cellsrvStatus:          running
         msStatus:               running
         rsStatus:               running
CellCLI> list physicaldisk
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12        normal

CellCLI> list lun
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04       /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04       normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12        normal

CellCLI> list lun where disktype=harddisk
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11        normal
         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12        normal


CellCLI> list lun where name like '.*disk01' detail
         name:                   /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
         cellDisk:               CD_disk01_cell
         deviceName:             /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
         diskType:               HardDisk
         id:                     /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
         isSystemLun:            FALSE
         lunAutoCreate:          FALSE
         lunSize:                500M
         physicalDrives:         /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
         raidLevel:              "RAID 0"
         status:                 normal

CellCLI> list celldisk CD_disk01_cell detail
         name:                   CD_disk01_cell
         comment:                
         creationTime:           2012-09-04T18:12:55-07:00
         deviceName:             /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
         devicePartition:        /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
         diskType:               HardDisk
         errorCount:             0
         freeSpace:              0
         id:                     00000139-93fc-a83d-0000-000000000000
         interleaving:           none
         lun:                    /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
         raidLevel:              "RAID 0"
         size:                   496M
         status:                 normal

CellCLI> list griddisk where celldisk=CD_disk01_cell detail
         name:                   data2_CD_disk01_cell
         availableTo:            
         cellDisk:               CD_disk01_cell
         comment:                
         creationTime:           2012-09-04T23:46:14-07:00
         diskType:               HardDisk
         errorCount:             0
         id:                     00000139-952d-dd5d-0000-000000000000
         offset:                 48M
         size:                   448M
         status:                 active

SQL> select name, path, state, total_mb from v$asm_disk
  2  where name like '%DATA2_CD_DISK01_CELL%';

NAME                           PATH                                                             STATE      TOTAL_MB
------------------------------ ---------------------------------------------------------------- -------- ----------
DATA2_CD_DISK01_CELL           o/192.168.10.101/data2_CD_disk01_cell                            NORMAL          448

SQL> select a.name disk, b.name diskgroup
  2  from v$asm_disk a, v$asm_diskgroup b
  3  where b.group_number = a.group_number
  4  and a.name like '%DATA2_CD_DISK01_CELL%';

DISK                           DISKGROUP
------------------------------ ------------------------------
DATA2_CD_DISK01_CELL           DATA

CELLSRV, MS, RS

CELLSRV:  A multithreaded server and the primary Exadata Storage Server software component, providing the majority of Exadata storage services. It communicates with Oracle Database to serve both simple block requests, such as database buffer cache reads, and Smart Scan requests, such as table scans with projections and filters. CELLSRV also implements IORM (I/O Resource Manager) and collects statistics on its operation.

MS (Management Server):  Provides Exadata cell management and configuration, and works with CellCLI. It is also responsible for sending alerts, and it collects some statistics in addition to those collected by CELLSRV.

RS (Restart Server):  Used to start up and shut down the Cell Server (CELLSRV) and Management Server (MS). It also monitors these services to determine whether they need to be restarted.
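
To check the status of all three services, or to restart them, use CellCLI:

CellCLI> list cell attributes cellsrvStatus, msStatus, rsStatus detail

CellCLI> alter cell restart services all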
 

Configure RMAN to automatically purge archived logs after they are applied on the standby database


RMAN can automatically purge archived logs from the FRA after they have been applied to the standby database.

rman target /
RMAN> configure archivelog deletion policy to applied on all standby;

To verify which archived logs have been applied:

select a.thread#, a.sequence#, a.applied
from v$archived_log a, v$database d
where a.activation# = d.activation#
and a.applied = 'YES'
order by a.thread#;

RMAN> show retention policy;

RMAN> report obsolete;

When there is space pressure in the FRA, the applied archived logs are deleted automatically, and you can see the deletions in alert_INSTANCE.log.
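
To watch FRA space usage and how much of it is reclaimable (applied logs eligible for deletion show up as reclaimable space), query v$recovery_area_usage:

select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from v$recovery_area_usage;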