ORA-00312, ORA-00338
ORA-00312: online log 5 thread 3: '+DATA_CD501/oltp/onlinelog/group_5.264.799435405'
ORA-00338: log 5 of thread 3 is more recent than control file
ORA-00312: online log 5 thread 3: '+DATA_CD501/oltp/onlinelog/group_5.265.799435401'
ORA-00338: log 5 of thread 3 is more recent than control file
ORA-00312: online log 5 thread 3: '+DATA_CD501/oltp/onlinelog/group_5.264.799435405'
ORA-00338: log 5 of thread 3 is more recent than control file
This matches Bug 12770551. For reference, see:
Bug 12770551 - Frequent ORA-338 during controlfile restore with ASYNC Data Guard (Doc ID 12770551.8)
The patch is available for download at:
https://updates.oracle.com/Orion/PatchDetails/process_form?patch_num=12770551
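To check whether the fix is already installed in your Oracle home, you can search the patch inventory (a quick sanity check; it assumes the patch number appears in the opatch listing):
$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 12770551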
How to trace a data pump process
1. Run the normal Data Pump export, or prepare a parfile similar to this:
FULL=Y
DUMPFILE=EXPDP_DUMPDIR:expdp_MYDB_ROWS_FULL%U_012413:1800.dmp
CLUSTER=N
LOGFILE=EXPDP_LOGDIR:expdp_MYDB_ROWS_FULL_012413:1800.log
PARALLEL=4
CONTENT=ALL
2. Connect to SQL*Plus as SYSDBA and obtain the Data Pump process information:
set lines 150 pages 100 numwidth 7
set time on
col program for a38
col username for a10
col spid for a7
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
from v$session s, v$process p, dba_datapump_sessions d
where p.addr=s.paddr and s.saddr=d.saddr;
3. From the same SQL*Plus session, using the SPID obtained in the step above, attach to the Data Pump worker (DW) process:
SQL> oradebug setospid <spid_of_dw_process>
4. Run the following command every few minutes to see which SQL statement the worker is currently executing:
SQL> oradebug current sql
Trace files are written for both the master control (DM) process and the worker (DW) processes.
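If you need more detail, Data Pump also accepts a TRACE parameter in the parfile. TRACE=480300 is the value commonly cited on My Oracle Support to enable full tracing of both the master control (DM) and worker (DW) processes; treat it as a starting point and confirm the value for your release:
TRACE=480300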
High I/O and CPU waits when Oracle Data Pump expdp is slow or hung
If a Data Pump job takes a very long time or hangs while consuming excessive resources (memory, CPU, I/O wait, etc.), check the paging/swapping-related parameters in /etc/sysctl.conf.
Example:
[oracle]$ vmstat 10 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff    cache  si  so   bi   bo   in    cs us sy id wa st
 2 14 930336 673184  84964 51242600   0   0  273  113    0     0  2  4 93  1  0
27 15 930328 628948  85332 51281352   1   0 7753 3403 1448 16757  1 69 18 11  0
54 16 930340 628852  83536 51264948  11   0  342 1832 1398 14427  0 98  0  1  0
34 18 930232 641848  85048 51256972  24   1 2177 2033 1989 21538  0 89  4  7  0
 0 14 929916 659768  85684 51253524  37   7 1527 1642 1480 15844  1 75 12 12  0
 6 14 929944 629840  86188 51273780 381   1 4967 3622 2092 24876  1 58 25 16  0
10 14 930120 629916  85744 51275736   3   0 4663 3254 1568 18072  1 61 20 18  0
19 14 930352 629204  83520 51254128  11  22  363 1594 1267 15785  0 79  8 13  0
 9 15 930436 667332  84216 51242292  18  23  921 1606 1525 14833  0 96  2  2  0
36 17 930396 629960  86064 51274904 366   1 5928 2564 2142 24199  1 28 46 25  0
The si/so columns show constant swap activity, so we added the settings below to /etc/sysctl.conf to reduce paging/swapping. After the change, the Data Pump job finished in about 1 hour (instead of running for 12+ hours, hanging, and consuming server resources). Work with your system administrator or Red Hat Support to make sure these parameters are appropriate for your environment, and monitor the results.
#BEGIN#
#Reduce swapping:
vm.swappiness = 10
#Maximum percentage of active memory that can have dirty pages:
vm.dirty_background_ratio=3
#Maximum percentage of total memory that can have dirty pages:
vm.dirty_ratio=15
#How long data can be in page cache before being expired:
vm.dirty_expire_centisecs=500
#How often pdflush is activated to clean dirty pages:
vm.dirty_writeback_centisecs=100
#END#
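To load the new settings without a reboot and confirm they took effect (standard Linux commands):
# sysctl -p
# sysctl vm.swappiness vm.dirty_background_ratio vm.dirty_ratio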
Exadata I/O Resource Manager (IORM) and Database Resource Manager (DBRM)
I/O Resource Manager (IORM) provides a way to manage I/O bandwidth for multiple databases in an Exadata environment. It helps you prioritize the I/O of a production database over test databases, and divide I/O resources among different classes of queries (DSS, reporting) or non-critical jobs such as ETL.
With the traditional approach, DBAs put critical databases on dedicated storage, add more disks, or reschedule non-critical tasks to off-peak/after-business hours, all of which are expensive and tedious solutions. To solve this, Exadata's IORM manages I/O based on your prioritization, regulating resource usage according to user priority, consumer groups, and resource plans.
1) Create Consumer Groups for each type of similar workload and create rules to dynamically map sessions to consumer groups based on session attributes
2) Create Resource Plans: 3 plans
Ratio-Based Plan: OLTP 60%, DSS 30%, Maintenance 10%
Priority-Based Plan: Priority 1: OLTP, Priority 2: DSS, Priority 3 : Maintenance
Hybrid Plan:
| Workload | Level 1 | Level 2 |
| OLTP | 80% | |
| DSS | 10% | 90% |
| Maintenance | 5% | |
Consumer groups and plans are configured using the DBMS_RESOURCE_MANAGER package or Resource Manager in Enterprise Manager.
You can create multiple plans (day plan, night plan, maintenance plan), but only one plan can be enabled at a time. To set the plan, set the resource_manager_plan parameter or use the job scheduler to enable it.
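A minimal sketch of the DBMS_RESOURCE_MANAGER calls, assuming made-up group and plan names (note that every plan also needs a directive for OTHER_GROUPS):
SQL> exec dbms_resource_manager.create_pending_area();
SQL> exec dbms_resource_manager.create_consumer_group('OLTP_GROUP', 'OLTP sessions');
SQL> exec dbms_resource_manager.create_plan('DAY_PLAN', 'Daytime plan');
SQL> exec dbms_resource_manager.create_plan_directive(plan => 'DAY_PLAN', group_or_subplan => 'OLTP_GROUP', mgmt_p1 => 60);
SQL> exec dbms_resource_manager.create_plan_directive(plan => 'DAY_PLAN', group_or_subplan => 'OTHER_GROUPS', mgmt_p1 => 40);
SQL> exec dbms_resource_manager.validate_pending_area();
SQL> exec dbms_resource_manager.submit_pending_area();
SQL> alter system set resource_manager_plan = 'DAY_PLAN' scope=both;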
IORM allows multiple databases to share Exadata storage effectively by preventing test databases from impacting the production database and by sharing resources fairly among production databases.
Inter-database plan: allocates resources to each database; these plans are configured and enabled via CellCLI. Exadata uses inter-database plans together with intra-database plans (based on consumer groups within each database) to prioritize production databases over standby, QA, and test databases.
| Database | Level 1 | Level 2 | Level 3 |
| Production OLTP | 70% | | |
| Production Reporting | 30% | | |
| DR OLTP Standby | | 100% | |
| QA database | | | 70% |
| Development database | | | 30% |
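A hedged CellCLI sketch of an inter-database plan matching the table above (the database names are examples; verify the ALTER IORMPLAN syntax against your cell software version):
CellCLI> alter iormplan dbplan=((name=prodoltp, level=1, allocation=70), (name=prodrpt, level=1, allocation=30), (name=stby, level=2, allocation=100), (name=qa, level=3, allocation=70), (name=dev, level=3, allocation=30), (name=other, level=4, allocation=100))
CellCLI> alter iormplan active
CellCLI> list iormplan detail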
Category Plan: a category is an attribute of each consumer group. A category plan allocates resources to each category, so multiple workload types can be managed by combining category plans, inter-database plans, and intra-database plans.
Exadata Smart Flash Cache
Exadata Smart Flash Cache is a write-through disk cache on the Exadata Storage Server. It caches data for all database instances that access the storage cell.
Different types of I/O from database:
Caching:
Control file reads and writes are cached
File header reads and writes are cached
Data Blocks and Index blocks are cached
Skip Caching:
I/O for ASM mirror copies, backups, Data Pump operations, and tablespace formatting is skipped, and table scans are treated as cache-resistant.
For a full rack, you cannot use more than 4.3 TB (about 80% of the total flash available to the Smart Flash Cache) for KEEP objects. An object marked KEEP can still be evicted from flash in some situations: the object has not been accessed in 48 hours, the block has not been accessed in the last 24 hours, or the object was dropped or truncated.
To create Exadata Smart Flash Cache:
CellCLI> list celldisk attributes name, disktype, size where name like 'FD.*'
FD_00_cell FlashDisk 496M
FD_01_cell FlashDisk 496M
FD_02_cell FlashDisk 496M
FD_03_cell FlashDisk 496M
CellCLI> create flashcache all size=1000m;
Flash cache cell_FLASHCACHE successfully created
CellCLI> list flashcache detail;
name: cell_FLASHCACHE
cellDisk: FD_01_cell,FD_03_cell,FD_02_cell,FD_00_cell
creationTime: 2013-01-28T16:38:18-08:00
degradedCelldisks:
effectiveCacheSize: 960M
id: 2f225eb8-d9e3-41a6-a7d8-691dd04809b5
size: 960M
status: normal
CellCLI> list griddisk
data2_CD_disk01_cell active
data2_CD_disk02_cell active
data2_CD_disk03_cell active
data2_CD_disk04_cell active
data2_CD_disk05_cell active
data2_CD_disk06_cell active
reco_CD_disk07_cell active
reco_CD_disk08_cell active
reco_CD_disk09_cell active
reco_CD_disk10_cell active
CellCLI> create griddisk all flashdisk prefix=flash;
GridDisk flash_FD_00_cell successfully created
GridDisk flash_FD_01_cell successfully created
GridDisk flash_FD_02_cell successfully created
GridDisk flash_FD_03_cell successfully created
CellCLI> list griddisk
data2_CD_disk01_cell active
data2_CD_disk02_cell active
data2_CD_disk03_cell active
data2_CD_disk04_cell active
data2_CD_disk05_cell active
data2_CD_disk06_cell active
flash_FD_00_cell active
flash_FD_01_cell active
flash_FD_02_cell active
flash_FD_03_cell active
reco_CD_disk07_cell active
reco_CD_disk08_cell active
reco_CD_disk09_cell active
reco_CD_disk10_cell active
Objects are cached in the Exadata Smart Flash Cache based on the automatic caching policy, but you can control the policy for a database object using one of three attributes: NONE (never cache), DEFAULT (the automatic caching mechanism), or KEEP (more aggressive caching).
SQL> alter table anguyen.mycustomers storage (cell_flash_cache keep);
Table altered.
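To gauge how much the flash cache is helping after a change like this, you can watch the cumulative flash cache hit statistic:
SQL> select name, value from v$sysstat where name = 'cell flash cache read hits';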
Hybrid Columnar Compression, Column projection, Predicate Filtering, Storage Indexes, Bloom filters
Hybrid Columnar Compression:
Tables are organized into compression units (CUs) that are larger than database blocks; a CU is usually around 32 KB. Within a compression unit, data is organized by column so that similar values sit close together, which improves compression. There are two modes of compression (a DDL sketch follows the list):
1) Query mode - about 10x savings
2) Archive mode - 15x to 50x savings
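The compression mode is chosen at DDL time, per table or partition; a minimal sketch (the table and columns are made-up examples):
SQL> create table sales_hist (sale_id number, sale_dt date, amount number) compress for query high;
SQL> alter table sales_hist move compress for archive high;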
Column projection: Returns only columns of interest between the database tier and storage tiers.
Predicate Filtering: Returns only rows of interest to the database tier.
Storage Indexes: maintain a minimum and maximum value for up to eight columns of a table for each 1 MB disk storage region. They reduce the time spent reading from the storage servers by eliminating regions with no matching rows, acting as a pre-filter much like partition pruning.
To disable / enable storage indexes: alter system set "_kcfis_storageidx_disabled"=true / false
To disable/enable smart scan: alter session set cell_offload_processing=false;
To measure the effectiveness of smart scan, run the following query:
select sql_ID, physical_read_bytes, physical_write_bytes, io_interconnect_bytes eligible, io_cell_offload_eligible_bytes, io_cell_uncompressed_bytes, io_cell_offload_returned_bytes, optimized_phy_read_requests
from v$sql
where sql_text like 'your query here %';
Simple joins (Bloom Filters):
Reduce traffic between parallel query slave processes, mostly in RAC.
To disable/enable Bloom filter offloading:
alter session set "_bloom_predicate_pushdown_to_storage"=false / true;
Exadata storage objects
Hierarchy of Exadata Storage Objects:
DISK-> LUN-> CELLDISK -> GRIDDISK-> ASM DISK
[celladmin@cell ~]$ cellcli
CellCLI: Release 11.2.2.1.0 - Production on Thu Jan 10 15:43:39 PST 2013
Copyright (c) 2007, 2009, Oracle. All rights reserved.
Cell Efficiency Ratio: 17M
CellCLI> list cell detail
name: cell
bmcType: absent
cellVersion: OSS_MAIN_LINUX_101005
cpuCount: 1
fanCount: 1/1
fanStatus: normal
id: 3c4bc34b-9acd-4a98-a910-c660c9c76c03
interconnectCount: 1
interconnect1: eth0
iormBoost: 0.0
ipaddress1: 192.168.10.101/24
kernelVersion: 2.6.18-194.el5
makeModel: Fake hardware
metricHistoryDays: 7
offloadEfficiency: 17.3M
powerCount: 1/1
powerStatus: normal
status: online
temperatureReading: 0.0
temperatureStatus: normal
upTime: 3 days, 6:22
cellsrvStatus: running
msStatus: running
rsStatus: running
CellCLI> list physicaldisk
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12 normal
CellCLI> list lun
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH01 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH02 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH03 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/FLASH04 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12 normal
CellCLI> list lun where disktype=harddisk
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk02 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk03 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk04 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk05 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk06 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk07 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk08 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk09 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk10 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk11 normal
/opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12 /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk12 normal
CellCLI> list lun where name like '.*disk01' detail
name: /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
cellDisk: CD_disk01_cell
deviceName: /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
diskType: HardDisk
id: /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
isSystemLun: FALSE
lunAutoCreate: FALSE
lunSize: 500M
physicalDrives: /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
raidLevel: "RAID 0"
status: normal
CellCLI> list celldisk CD_disk01_cell detail
name: CD_disk01_cell
comment:
creationTime: 2012-09-04T18:12:55-07:00
deviceName: /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
devicePartition: /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
diskType: HardDisk
errorCount: 0
freeSpace: 0
id: 00000139-93fc-a83d-0000-000000000000
interleaving: none
lun: /opt/oracle/cell11.2.2.1.0_LINUX_101005/disks/raw/disk01
raidLevel: "RAID 0"
size: 496M
status: normal
CellCLI> list griddisk where celldisk=CD_disk01_cell detail
name: data2_CD_disk01_cell
availableTo:
cellDisk: CD_disk01_cell
comment:
creationTime: 2012-09-04T23:46:14-07:00
diskType: HardDisk
errorCount: 0
id: 00000139-952d-dd5d-0000-000000000000
offset: 48M
size: 448M
status: active
SQL> select name, path, state, total_mb from v$asm_disk
2 where name like '%DATA2_CD_DISK01_CELL%';
NAME PATH STATE TOTAL_MB
------------------------------ ---------------------------------------------------------------- -------- ----------
DATA2_CD_DISK01_CELL o/192.168.10.101/data2_CD_disk01_cell NORMAL 448
SQL> select a.name disk, b.name diskgroup
2 from v$asm_disk a, v$asm_diskgroup b
3 where b.group_number = a.group_number
4 and a.name like '%DATA2_CD_DISK01_CELL%';
DISK DISKGROUP
------------------------------ ------------------------------
DATA2_CD_DISK01_CELL DATA
CELLSRV, MS, RS
CELLSRV: a multithreaded server and the primary Exadata Storage Server software component, providing the majority of Exadata storage services. It communicates with Oracle Database to serve simple block requests, such as database buffer cache reads, as well as Smart Scan requests, such as table scans with projections and filters. CELLSRV also implements IORM and collects statistics about its operation.
MS (Management Server): provides Exadata cell management and configuration and works with CellCLI. It is also responsible for sending alerts, and it collects some statistics in addition to those collected by CELLSRV.
RS (Restart Server): used to start up and shut down the Cell Server (CELLSRV) and Management Server (MS). It also monitors these services to check whether they need to be restarted.
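To check and bounce these services from CellCLI (standard commands):
CellCLI> list cell attributes cellsrvStatus, msStatus, rsStatus
CellCLI> alter cell restart services all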
Configure RMAN to automatically purge archive logs after applying on the standby database
RMAN can automatically purge archive logs from the FRA once they have been applied on the standby database.
rman target /
RMAN> configure archivelog deletion policy to applied on all standby;
To verify which logs have been applied, run this from SQL*Plus:
select a.thread#, a.sequence#, a.applied
from v$archived_log a, v$database d
where a.activation# = d.activation#
and a.applied = 'YES'
order by a.thread#;
RMAN> show retention policy;
RMAN> report obsolete;
When there is space pressure in the FRA, the applied archive logs are deleted automatically; you can see this in alert_<INSTANCE>.log.
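With the deletion policy in place, a manual delete honors it as well; RMAN skips (and warns about) logs that have not yet been applied on all standby databases:
RMAN> show archivelog deletion policy;
RMAN> delete noprompt archivelog all;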
Create Flash devices and Flash-based disk group
Examine your current flash-based cell disks:
CellCLI> list celldisk attributes name, freeSpace where diskType=FlashDisk;
FD_00_cell 500M
FD_01_cell 500M
FD_02_cell 500M
FD_03_cell 500M
Create a Smart Flash Cache of 1 GB:
CellCLI> create flashcache all size=1024m;
Flash cache cell_FLASHCACHE successfully created
The available free space on all flash-based cell disks is now:
CellCLI> list celldisk attributes name, freeSpace where diskType=FlashDisk;
FD_00_cell 192M
FD_01_cell 192M
FD_02_cell 192M
FD_03_cell 192M
CellCLI> create griddisk all flashdisk prefix=flash
GridDisk flash_FD_00_cell successfully created
GridDisk flash_FD_01_cell successfully created
GridDisk flash_FD_02_cell successfully created
GridDisk flash_FD_03_cell successfully created
The flash grid disks are created across the cells:
CellCLI> list griddisk attributes name, size, ASMModeStatus where disktype=flashdisk;
flash_FD_00_cell 192M UNUSED
flash_FD_01_cell 192M UNUSED
flash_FD_02_cell 192M UNUSED
flash_FD_03_cell 192M UNUSED
SQL> select path, header_status from v$asm_disk
2* where path like '%flash%'
PATH HEADER_STATU
---------------------------------------------------------------- ------------
o/192.168.10.101/flash_FD_02_cell CANDIDATE
o/192.168.10.101/flash_FD_03_cell CANDIDATE
o/192.168.10.101/flash_FD_01_cell CANDIDATE
o/192.168.10.101/flash_FD_00_cell CANDIDATE
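To finish building the flash-based disk group, create it from ASM the same way the disk-based group is created later in this post (the disk group name FLASH is an example):
SQL> create diskgroup FLASH external redundancy disk 'o/*/flash*' attribute 'compatible.rdbms' = '11.2.0.0.0', 'compatible.asm' = '11.2.0.0.0', 'cell.smart_scan_capable' = 'TRUE', 'au_size' = '4M';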
Create grid disks and ASM diskgroup
Prepare grid disks: create a set of grid disks on the free space of all available hard-disk-based cell disks.
CellCLI> list griddisk
reco_CD_disk07_cell active
reco_CD_disk08_cell active
reco_CD_disk09_cell active
reco_CD_disk10_cell active
reco_CD_disk11_cell active
reco_CD_disk12_cell active
CellCLI> create griddisk all harddisk prefix=data2, size=100m;
Cell disks were skipped because they had no freespace for grid disks: CD_disk07_cell, CD_disk08_cell, CD_disk09_cell, CD_disk10_cell, CD_disk11_cell, CD_disk12_cell.
GridDisk data2_CD_disk01_cell successfully created
GridDisk data2_CD_disk02_cell successfully created
GridDisk data2_CD_disk03_cell successfully created
GridDisk data2_CD_disk04_cell successfully created
GridDisk data2_CD_disk05_cell successfully created
GridDisk data2_CD_disk06_cell successfully created
CellCLI> list griddisk;
data2_CD_disk01_cell active
data2_CD_disk02_cell active
data2_CD_disk03_cell active
data2_CD_disk04_cell active
data2_CD_disk05_cell active
data2_CD_disk06_cell active
reco_CD_disk07_cell active
reco_CD_disk08_cell active
reco_CD_disk09_cell active
reco_CD_disk10_cell active
reco_CD_disk11_cell active
reco_CD_disk12_cell active
CellCLI> list griddisk attributes name, size, ASMModeStatus
data2_CD_disk01_cell 96M UNUSED
data2_CD_disk02_cell 96M UNUSED
data2_CD_disk03_cell 96M UNUSED
data2_CD_disk04_cell 96M UNUSED
data2_CD_disk05_cell 96M UNUSED
data2_CD_disk06_cell 96M UNUSED
reco_CD_disk07_cell 448M ONLINE
reco_CD_disk08_cell 448M ONLINE
reco_CD_disk09_cell 448M ONLINE
reco_CD_disk10_cell 448M ONLINE
reco_CD_disk11_cell 448M ONLINE
reco_CD_disk12_cell 448M ONLINE
Create the ASM disk group with AU_SIZE=4M (the recommended value):
SQL> create diskgroup DATA external redundancy disk 'o/*/data2*' attribute
'compatible.rdbms' = '11.2.0.0.0', 'compatible.asm' = '11.2.0.0.0',
'cell.smart_scan_capable' = 'TRUE', 'au_size' = '4M';
Diskgroup created.
SQL> select name, state from v$asm_diskgroup;
NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED
FRA                            MOUNTED
CellCLI> list griddisk attributes name, size, ASMModeStatus
data2_CD_disk01_cell 96M ONLINE
data2_CD_disk02_cell 96M ONLINE
data2_CD_disk03_cell 96M ONLINE
data2_CD_disk04_cell 96M ONLINE
data2_CD_disk05_cell 96M ONLINE
data2_CD_disk06_cell 96M ONLINE
reco_CD_disk07_cell 448M ONLINE
reco_CD_disk08_cell 448M ONLINE
reco_CD_disk09_cell 448M ONLINE
reco_CD_disk10_cell 448M ONLINE
reco_CD_disk11_cell 448M ONLINE
reco_CD_disk12_cell 448M ONLINE
Exadata configuration tasks: cell and storage provisioning
To run a performance test on the cell, use CALIBRATE (shut down CELLSRV first):
CellCLI> alter cell shutdown services cellsrv
Stopping CELLSRV services...
The SHUTDOWN of CELLSRV services was successful.
CellCLI> calibrate;
Calibration will take a few minutes...
Aggregate random read throughput across all hard disk luns: 139 MBPS
Aggregate random read throughput across all flash disk luns: 2720.05 MBPS
Aggregate random read IOs per second (IOPS) across all hard disk luns: 1052
Aggregate random read IOs per second (IOPS) across all flash disk luns: 143248
Controller read throughput: 5477.08 MBPS
Calibrating hard disks (read only) ...
Calibrating flash disks (read only, note that writes will be significantly slower) ...
…….
CALIBRATE stress test is now running...
Calibration has finished.
To validate the cell configuration:
CellCLI> alter cell validate configuration;
Cell cell successfully altered
To manage cell disks and grid disks:
CellCLI> list celldisk where freespace !=0
CD_disk01 normal
CellCLI> create griddisk all harddisk prefix=datagri
CellCLI> drop celldisk CD_disk01_cell;
CellDisk CD_disk01_cell successfully dropped
CellCLI> list celldisk
CellCLI> create celldisk all;
CellDisk CD_disk01 successfully created
To power off an Exadata Storage Server safely, first confirm that deactivating the grid disks will not impact ASM, then deactivate them and shut down:
CellCLI> list griddisk attributes name where asmdeactivationoutcome != 'Yes'
CellCLI> alter griddisk all inactive
shutdown -h -y now
After the cell comes back up, reactivate the grid disks and poll their status, repeating the check until every grid disk reports ONLINE:
CellCLI> alter griddisk all active
CellCLI> list griddisk attributes name, asmmodestatus
CellCLI> list griddisk
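On a real rack you would run these checks on every cell; dcli makes that easier (cell_group here is an assumed text file listing the cell host names):
[celladmin@cell ~]$ dcli -g cell_group -l celladmin "cellcli -e list griddisk attributes name, asmmodestatus"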