Oracle 11g RAC: Rescuing an ASM Disk Group Forced Offline — a Field Case

It has been a while since I last wrote anything about Oracle — probably one of the reasons I will never make expert level; I suffer from textbook procrastination and laziness.
Today's example is another on-site case. At lunch I noticed a missed call from a colleague, and professional instinct told me something was wrong in the data center. I called back. First call: a disk in one storage array had failed, and an NBU backup job hadn't completed — nothing serious, just get the vendor in for a repair. Right after lunch the second call came in: now an active-active dual-controller storage array was alarming as well, and the RAC cluster's status could no longer be queried. I silently prayed we hadn't been caught up in the recent Bitcoin-ransomware wave — just last Tuesday I'd had every vendor's engineers sweep the systems. Don't do this to me; that kind of damage I genuinely cannot repair.
So much for my lunch break — better deal with it fast, since I still had three C# lab sessions that afternoon. No quiet weekends. I hurried to the site and, as I'd guessed, a colleague was squeezing in the midday inspection. Asked about recent rounds, he said inspections had been done Monday through Friday with nothing found, but had been skipped on Saturday. I opened a terminal to the DB server, and sure enough there was a problem:

[12:59:29][root: ~]#ckrac        <-- a small script I wrote myself
[12:59:31]CRS-4535: Cannot communicate with Cluster Ready Services
[12:59:31]CRS-4000: Command Status failed, or completed with errors.
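ckrac is nothing fancy — a minimal sketch of such a wrapper, reconstructed from memory rather than the exact script (the Grid home path matches this system; the subcommands are standard 11.2 crsctl):

#!/bin/bash
# ckrac -- one command to check clusterware health and resource status
GRID_HOME=/u01/app/11.2.0/grid
$GRID_HOME/bin/crsctl check crs       # CRS / CSS / EVM daemon health
$GRID_HOME/bin/crsctl stat res -t     # tabular status of all cluster resources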

First impression: the clusterware stack was down.
Check the database immediately:

su - oracle
sqlplus / as sysdba
select * from v$instance;
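select * on v$instance returns a wide row; for a glance-check, a narrower projection of the same view is easier to read:

-- same check, just the columns that matter here
select instance_name, status, database_status from v$instance;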

Good — the instance was still alive, so the data was basically safe.
Next, try to bring the cluster up:

[13:07:48][root: /u01/app/11.2.0/grid/bin]#./crsctl start cluster
[13:07:48]CRS-2672: Attempting to start 'ora.crsd' on 'rac01'
[13:07:49]CRS-2676: Start of 'ora.crsd' on 'rac01' succeeded
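crsd reported a successful start, but that alone proves little; the usual follow-up (both standard 11.2 crsctl subcommands) would be:

./crsctl check crs        # are CRS, CSS and EVM all online?
./crsctl stat res -t      # full resource table, including the .dg diskgroup resources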

Then on to the logs. The node's instance alert log first — nothing suspicious there:

[13:13:33][root: /u01/app/11.2.0/grid/bin]# cd /u01/app/oracle/diag/rdbms/gbf1/GBF11/trace
[13:13:52][root: /u01/app/oracle/diag/rdbms/gbf1/GBF11/trace]#tail -500  alert_GBF11.log |more

Now I was a bit thrown. It was clearly a clusterware problem, but what exactly?
Check the cluster logs — the price of skipping homework and notes: I had to find the file first:

[13:19:33][root: ~]#find /u01 -name 'crsd.log'
[13:19:45]/u01/app/11.2.0/grid/log/rac01/crsd/crsd.log
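For the record, find was overkill: in 11gR2 the clusterware logs live in a fixed tree under the Grid home, one directory per daemon, so the same file is reachable directly:

# <GRID_HOME>/log/<node_name>/crsd/crsd.log
tail -200 /u01/app/11.2.0/grid/log/rac01/crsd/crsd.log

Here is what crsd.log had to say: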

[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: COMMCRS  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: COMMNS  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: CSSCLNT  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCLIB  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCXBAD  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCLXPT  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCUNDE  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPC  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCGEN  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCTRAC  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCWAIT  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCXCPT  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCOSD  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCBASE  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCCLSA  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCCLSC  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCEXMP  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCGMOD  0
[13:20:42]2016-11-20 13:23:50.303: [    CRSD][3134007024] Logging level for Module: GIPCHEAD  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCMUX  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCNET  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCNULL  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCPKT  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCSMEM  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCHAUP  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCHALO  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCHTHR  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCHGEN  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCHLCK  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCHDEM  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: GIPCHWRK  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSMAIN  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: clsdmt  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: clsdms  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSUI  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSCOMM  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSRTI  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSPLACE  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSAPP  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSRES  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSTIMER  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSEVT  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSD  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CLUCLS  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CLSVER  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CLSFRAME  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSPE  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSSE  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSRPT  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSOCR  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: UiServer  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: AGFW  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: SuiteTes  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSSHARE  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSSEC  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSCCL  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: CRSCEVT  0
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: AGENT  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: OCRAPI  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: OCRCLI  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: OCRSRV  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: OCRMAS  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: OCRMSG  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: OCRCAC  1
[13:20:42]2016-11-20 13:23:50.304: [    CRSD][3134007024] Logging level for Module: OCRRAW  1
[13:20:42]2016-11-20 13:23:50.305: [    CRSD][3134007024] Logging level for Module: OCRUTL  1
[13:20:42]2016-11-20 13:23:50.305: [    CRSD][3134007024] Logging level for Module: OCROSD  1
[13:20:42]2016-11-20 13:23:50.305: [    CRSD][3134007024] Logging level for Module: OCRASM  1
[13:20:42]2016-11-20 13:23:50.305: [ CRSMAIN][3134007024] Checking the OCR device
[13:20:42]2016-11-20 13:23:50.305: [ CRSMAIN][3134007024] Sync-up with OCR
[13:20:42]2016-11-20 13:23:50.305: [ CRSMAIN][3134007024] Connecting to the CSS Daemon
[13:20:42]2016-11-20 13:23:50.305: [ CRSMAIN][3134007024] Getting local node number
[13:20:42]2016-11-20 13:23:50.306: [ CRSMAIN][3134007024] Initializing OCR
[13:20:42]2016-11-20 13:23:50.307: [ CRSMAIN][3127560512] Policy Engine is not initialized yet!
[13:20:42][   CLWAL][3134007024]clsw_Initialize: OLR initlevel [70000]
[13:20:42]2016-11-20 13:23:50.622: [  OCRASM][3134007024]proprasmo: Error in open/create file in dg [OCRVDISK]   <-- note this line
[13:20:42][  OCRASM][3134007024]SLOS : SLOS: cat=8, opn=kgfoOpen01, dep=15056, loc=kgfokge
[13:20:42]
[13:20:42]2016-11-20 13:23:50.623: [  OCRASM][3134007024]ASM Error Stack : 
[13:20:42]2016-11-20 13:23:50.667: [  OCRASM][3134007024]proprasmo: kgfoCheckMount returned [6]
[13:20:42]2016-11-20 13:23:50.667: [  OCRASM][3134007024]proprasmo: The ASM disk group OCRVDISK is not found or not mounted
[13:20:42]2016-11-20 13:23:50.667: [  OCRRAW][3134007024]proprioo: Failed to open [+OCRVDISK]. Returned proprasmo() with [26]. Marking location as UNAVAILABLE.
[13:20:42]2016-11-20 13:23:50.667: [  OCRRAW][3134007024]proprioo: No OCR/OLR devices are usable
[13:20:42]2016-11-20 13:23:50.667: [  OCRASM][3134007024]proprasmcl: asmhandle is NULL
[13:20:42]2016-11-20 13:23:50.668: [    GIPC][3134007024] gipcCheckInitialization: possible incompatible non-threaded init from [prom.c : 690], original from [clsss.c : 5343]
[13:20:42]2016-11-20 13:23:50.668: [ default][3134007024]clsvactversion:4: Retrieving Active Version from local storage.
[13:20:42]2016-11-20 13:23:50.670: [ CSSCLNT][3134007024]clssgsgrppubdata: group (ocr_rac-cluster01) not found
[13:20:42]
[13:20:42]2016-11-20 13:23:50.670: [  OCRRAW][3134007024]proprio_repairconf: Failed to retrieve the group public data. CSS ret code [20]
[13:20:42]2016-11-20 13:23:50.672: [  OCRRAW][3134007024]proprioo: Failed to auto repair the OCR configuration.
[13:20:42]2016-11-20 13:23:50.672: [  OCRRAW][3134007024]proprinit: Could not open raw device 
[13:20:42]2016-11-20 13:23:50.672: [  OCRASM][3134007024]proprasmcl: asmhandle is NULL
[13:20:42]2016-11-20 13:23:50.675: [  OCRAPI][3134007024]a_init:16!: Backend init unsuccessful : [26]
[13:20:42]2016-11-20 13:23:50.675: [  CRSOCR][3134007024] OCR context init failure.  Error: PROC-26: Error while accessing the physical storage
[13:20:42]
[13:20:42]2016-11-20 13:23:50.675: [    CRSD][3134007024] Created alert : (:CRSD00111:) :  Could not init OCR, error: PROC-26: Error while accessing the physical storage
[13:20:42]
[13:20:42]2016-11-20 13:23:50.675: [    CRSD][3134007024][PANIC] CRSD exiting: Could not init OCR, code: 26
[13:20:42]2016-11-20 13:23:50.675: [    CRSD][3134007024] Done.

From the CRSD log the culprit is clear: the diskgroup [+OCRVDISK] holding the OCR cannot be opened, so CRSD panics with PROC-26.
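As a cross-check, ocrcheck should be failing in exactly the same way while the group is dismounted — the ASM alert log further down in fact shows ocrcheck.bin being run during the incident. A quick confirmation (as root or the grid owner; expect PROT-/PROC- errors until the group is mounted again):

/u01/app/11.2.0/grid/bin/ocrcheck

The real question, though, is what ASM itself thinks of the diskgroup: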

su - grid
sqlplus / as sysasm
SQL> select GROUP_NUMBER,NAME,TYPE,ALLOCATION_UNIT_SIZE,STATE from v$asm_diskgroup;

GROUP_NUMBER NAME     TYPE   ALLOCATION_UNIT_SIZE STATE
------------ -------- ------ -------------------- -----------
           1 BAK01    EXTERN              4194304 MOUNTED
           2 DATA01   EXTERN              4194304 MOUNTED
           3 GBF1_ARC EXTERN              4194304 MOUNTED
           4 GBF2_ARC EXTERN              4194304 MOUNTED
           0 OCRVDISK                           0 DISMOUNTED
           6 YWDB_ARC EXTERN              4194304 MOUNTED

6 rows selected.

SQL> alter diskgroup OCRVDISK mount;
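Attempting the mount is the obvious move, but it is also worth asking whether ASM can still see the member disks at all; a sketch against v$asm_disk, in the same sysasm session (disks belonging to a dismounted group typically report group_number 0):

SQL> select group_number, name, path, header_status, mount_status, state
  2  from v$asm_disk
  3  order by group_number;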

The diskgroup listing further confirms it: OCRVDISK is the only group dismounted — this group is definitely in trouble!
Check the ASM alert log:

cd /u01/app/grid/diag/asm/+asm/+ASM1/trace
#tail -100 alert_+ASM1.log 
[13:38:07]Thu Nov 17 19:13:56 2016
[13:38:07]WARNING: dirty detached from domain 5
[13:38:07]NOTE: cache dismounted group 5/0xAC0A8092 (OCRVDISK) 
[13:38:07]SQL> alter diskgroup OCRVDISK dismount force /* ASM SERVER:2886369426 */   <-- note this line
[13:38:07]Thu Nov 17 19:13:56 2016
[13:38:07]NOTE: cache deleting context for group OCRVDISK 5/0xac0a8092
[13:38:07]GMON dismounting group 5 at 21 for pid 28, osid 20749
[13:38:07]NOTE: Disk OCRVDISK_0000 in mode 0x7f marked for de-assignment
[13:38:07]NOTE: Disk OCRVDISK_0001 in mode 0x7f marked for de-assignment
[13:38:07]NOTE: Disk OCRVDISK_0002 in mode 0x7f marked for de-assignment
[13:38:07]SUCCESS: diskgroup OCRVDISK was dismounted
[13:38:07]SUCCESS: alter diskgroup OCRVDISK dismount force /* ASM SERVER:2886369426 */
[13:38:07]SUCCESS: ASM-initiated MANDATORY DISMOUNT of group OCRVDISK
[13:38:07]Thu Nov 17 19:13:56 2016
[13:38:07]NOTE: diskgroup resource ora.OCRVDISK.dg is offline
[13:38:07]ASM Health Checker found 1 new failures
[13:38:07]Thu Nov 17 19:13:56 2016
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Thu Nov 17 20:57:21 2016
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]WARNING: requested mirror side 1 of virtual extent 6 logical extent 0 offset 462848 is not allocated; I/O request failed
[13:38:07]WARNING: requested mirror side 2 of virtual extent 6 logical extent 1 offset 462848 is not allocated; I/O request failed
[13:38:07]Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc:
<-- follow up in this trace file
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]ORA-15078: ASM diskgroup was forcibly dismounted
[13:38:07]Thu Nov 17 20:57:21 2016
[13:38:07]SQL> alter diskgroup OCRVDISK check /* proxy */ 
[13:38:07]ORA-15032: not all alterations performed
[13:38:07]ORA-15001: diskgroup "OCRVDISK" does not exist or is not mounted
[13:38:07]ERROR: alter diskgroup OCRVDISK check /* proxy */
[13:38:07]Thu Nov 17 20:57:37 2016
[13:38:07]NOTE: client exited [27134]
[13:38:07]Thu Nov 17 20:57:38 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1688] opening OCR file
[13:38:07]Thu Nov 17 20:57:40 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1707] opening OCR file
[13:38:07]Thu Nov 17 20:57:42 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1726] opening OCR file
[13:38:07]Thu Nov 17 20:57:44 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1749] opening OCR file
[13:38:07]Thu Nov 17 20:57:46 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1772] opening OCR file
[13:38:07]Thu Nov 17 20:57:48 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1791] opening OCR file
[13:38:07]Thu Nov 17 20:57:50 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1814] opening OCR file
[13:38:07]Thu Nov 17 20:57:52 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1833] opening OCR file
[13:38:07]Thu Nov 17 20:57:54 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1852] opening OCR file
[13:38:07]Thu Nov 17 20:57:57 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 1872] opening OCR file
[13:38:07]Sun Nov 20 13:23:30 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28546] opening OCR file
[13:38:07]Sun Nov 20 13:23:32 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28566] opening OCR file
[13:38:07]Sun Nov 20 13:23:34 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28585] opening OCR file
[13:38:07]Sun Nov 20 13:23:36 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28607] opening OCR file
[13:38:07]Sun Nov 20 13:23:38 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28627] opening OCR file
[13:38:07]Sun Nov 20 13:23:40 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28651] opening OCR file
[13:38:07]Sun Nov 20 13:23:42 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28678] opening OCR file
[13:38:07]Sun Nov 20 13:23:44 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28700] opening OCR file
[13:38:07]Sun Nov 20 13:23:46 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28723] opening OCR file
[13:38:07]Sun Nov 20 13:23:48 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28742] opening OCR file
[13:38:07]Sun Nov 20 13:23:50 2016
[13:38:07]NOTE: [crsd.bin@rac01 (TNS V1-V3) 28761] opening OCR file
[13:38:07]Sun Nov 20 13:25:14 2016
[13:38:07]NOTE: [ocrcheck.bin@rac01 (TNS V1-V3) 28960] opening OCR file
[13:38:07]Sun Nov 20 13:38:53 2016
[13:38:07]NOTE: No asm libraries found in the system
[13:38:07]MEMORY_TARGET defaulting to 1128267776.
[13:38:07]* instance_number obtained from CSS = 1, checking for the existence of node 0... 
[13:38:07]* node 0 does not exist. instance_number = 1 
[13:38:07]Starting ORACLE instance (normal)

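The dismount was ASM-initiated after I/O against the group failed, which points back at the storage layer rather than at ASM itself. One way to inspect each member disk from the OS side is kfed, which ships in the Grid home (the device path below is a placeholder for the actual OCRVDISK members):

# read the ASM disk header straight off the device, as the grid user
/u01/app/11.2.0/grid/bin/kfed read /dev/mapper/ocrvdisk1 | egrep 'kfbh.type|kfdhdb.hdrsts|kfdhdb.grpname'
# a healthy member shows kfbh.type KFBTYP_DISKHEAD and kfdhdb.grpname OCRVDISK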
Digging further into +ASM1_ora_27147.trc:

#tail -500  +ASM1_ora_27147.trc
[13:39:54]Trace file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_27147.trc
[13:39:54]Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
[13:39:54]With the Real Application Clusters and Automatic Storage Management options
[13:39:54]ORACLE_HOME = /u01/app/11.2.0/grid
[13:39:54]System name:    Linux
[13:39:54]Node name:      rac01
[13:39:54]Release:        2.6.39-300.26.1.el5uek
[13:39:54]Version:        #1 SMP Thu Jan 3 18:31:38 PST 2013
[13:39:54]Machine:        x86_64
[13:39:54]VM name:        VMWare Version: 6
[13:39:54]Instance name: +ASM1
[13:39:54]Redo thread mounted by this instance: 0 <none>
[13:39:54]Oracle process number: 24
[13:39:54]Unix process pid: 27147, image: oracle@rac01 (TNS V1-V3)
[13:39:54]
[13:39:54]
[13:39:54]*** 2016-11-17 19:13:56.353
[13:39:54]*** SESSION ID:(3.3) 2016-11-17 19:13:56.353
[13:39:54]*** CLIENT ID:() 2016-11-17 19:13:56.353
[13:39:54]*** SERVICE NAME:() 2016-11-17 19:13:56.353
[13:39:54]*** MODULE NAME:(oracle@rac01 (TNS V1-V3)) 2016-11-17 19:13:56.353
[13:39:54]*** ACTION NAME:() 2016-11-17 19:13:56.353
[13:39:54] 
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]
[13:39:54]*** 2016-11-17 19:14:18.676
[13:39:54]Received ORADEBUG command (#1) 'CLEANUP_KFK_FD' from process 'Unix process pid: 20811, image: <none>'
[13:39:54]
[13:39:54]*** 2016-11-17 19:14:18.697
[13:39:54]Finished processing ORADEBUG command (#1) 'CLEANUP_KFK_FD'
[13:39:54]
[13:39:54]*** 2016-11-17 20:57:21.852
[13:39:54]WARNING:failed xlate 1 
[13:39:54]ORA-15078: ASM diskgroup was forcibly dismounted
[13:39:54]ksfdrfms:Mirror Read file=+OCRVDISK.255.4294967295 fob=0x9d804848 bufp=0x7f0543d2fa00 blkno=1649 nbytes=4096
[13:39:54]WARNING:failed xlate 1 
[13:39:54]WARNING: requested mirror side 1 of virtual extent 6 logical extent 0 offset 462848 is not allocated; I/O request failed
[13:39:54]ksfdrfms:Read failed from mirror side=1 logical extent number=0 dskno=
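The trace breaks off here, but it already tells the story: reads against OCRVDISK failed on both mirror sides, so ASM force-dismounted the group and CRSD lost its OCR. Once the storage layer is repaired and every member disk answers I/O again, the usual recovery path — a sketch of the standard steps, not a transcript of this incident — is to remount the group and restart the stack:

# as the grid user, in the ASM instance:
SQL> alter diskgroup OCRVDISK mount;

# then as root:
/u01/app/11.2.0/grid/bin/crsctl start cluster -all
/u01/app/11.2.0/grid/bin/crsctl stat res -t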