
An Analysis of "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!"


Today my database hung. The alert log showed:

WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=31

The environment is a RAC on AIX 5.3, Oracle 10.2.0.4.

Since the symptoms are identical, I have not pulled the logs out of my own production system; instead I am referring to the material from another blog post, which is as follows:

1. Shanxi Telecom alert log: alert_mega.log

Mon Jun 01 02:50:03 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=31
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_27190.trc

Tue Jun 02 02:50:03 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=32
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_2063.trc
Tue Jun 02 17:53:40 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=37
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_7006.trc
Tue Jun 02 18:00:41 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=38
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_7120.trc
Wed Jun 03 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=39
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_9632.trc
Wed Jun 03 20:54:39 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=41
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_15023.trc
Wed Jun 03 20:55:07 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=42
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_15029.trc
Wed Jun 03 20:55:26 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=43
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_15033.trc
Thu Jun 04 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=44
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_16725.trc
Fri Jun 05 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=45
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_23781.trc
Sat Jun 06 02:50:03 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=46
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_30833.trc
Sat Jun 06 23:50:03 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=47
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_4951.trc
Sun Jun 07 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=48
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_5891.trc
Mon Jun 08 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=49
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_7520.trc
Tue Jun 09 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=50
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_14768.trc
Wed Jun 10 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=51
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_23975.trc
Wed Jun 10 22:24:05 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=52
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_29755.trc
Wed Jun 10 22:25:03 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=53
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_29760.trc
Thu Jun 11 02:50:02 CST 2015
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=54
System State dumped to trace file /opt/oracle/admin/dba/udump/mega_ora_31040.trc

2.

The trace file contains a SYSTEMSTATE DUMP, so the first step is to analyze the systemstate dump with ASS.awk:



Resource Holder State
Enqueue PR-00000000-00000000 41: last wait for ‘os thread startup‘
Latch c0000000c2df3b70 ??? Blocker
Rcache object=c000000f9fdf8160, 61: waiting for ‘log file switch (checkpoint incomplete)‘
Enqueue US-0000004C-00000000 185: waiting for ‘log file switch (checkpoint incomplete)‘
Enqueue FB-00000006-0186BDC8 187: waiting for ‘log file switch (checkpoint incomplete)‘
LOCK: handle=c000000f388db3d0 204: waiting for ‘log file switch (checkpoint incomplete)‘


Object Names
~~~~~~~~~~~~
Enqueue PR-00000000-00000000
Latch c0000000c2df3b70 holding (efd=5) c0000000c2df3b70 slave cl
Rcache object=c000000f9fdf8160,
Enqueue US-0000004C-00000000
Enqueue FB-00000006-0186BDC8
LOCK: handle=c000000f388db3d0 TABL:REPORT.STATQ_AGENT_SUBS_NEW


It looks like the session with ORAPID=41 may be the blocker. The ass report also shows that the system is full of 'log file switch (checkpoint incomplete)' waits. Let us first look at the holder of latch c0000000c2df3b70:


PROCESS 41:
----------------------------------------
SO: c000000fbe3c1910, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
(process) Oracle pid=41, calls cur/top: c000000f9d63cbd0/c000000fa2c63088, flag: (2) SYSTEM
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 0 0 33
last post received-location: ksrpublish
last process to post me: c000000fba3f0d80 1 6
last post sent: 0 0 24
last post sent-location: ksasnd
last process posted by me: c000000fb93cbc00 1 6
(latch info) wait_event=0 bits=1
holding (efd=5) c0000000c2df3b70 slave class create level=0
Location from where latch is held: ksvcreate:
Context saved from call: 0
state=busy, wlstate=free
waiters [orapid (seconds since: put on list, posted, alive check)]:
48 (61788, 1251079676, 3)
118 (61728, 1251079676, 0)
waiter count=2
Process Group: DEFAULT, pseudo proc: c000000fbc44e418
O/S info: user: oracle, term: UNKNOWN, ospid: 16929
OSD pid info: Unix process pid: 16929, image: [email protected] (MMON)
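
The dump shows that the holder of the latch is MMON (Oracle pid=41). As an aside, on an instance that is still responsive, the holder of a given latch address can also be checked with a query along these lines; this is only a hedged sketch using the address from the dump, and was obviously not an option on this hung system:

-- Who currently holds the latch at this address (address taken from the dump)?
SELECT pid, sid, laddr, name
  FROM v$latchholder
 WHERE laddr = HEXTORAW('C0000000C2DF3B70');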


MMON is waiting on:


service name: SYS$BACKGROUND
last wait for ‘os thread startup‘ blocking sess=0x0000000000000000 seq=59956 wait_time=1006468 seconds since wait started=62472
=0, =0, =0
Dumping Session Wait History
for ‘os thread startup‘ count=1 wait_time=1006468
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=958707
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=1017506
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=975671
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=976846
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=984219
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=585799
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=1355858
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=979583
=0, =0, =0
for ‘os thread startup‘ count=1 wait_time=990744
=0, =0, =0
It looks like MMON is waiting for another process that it is spawning to finish starting up:


SO: c000000f9e6e3a20, type: 16, owner: c000000fbe3c1910, flag: INIT/-/-/0x00
(osp req holder)
CHILD REQUESTS:
(osp req) type=2(BACKGROUND) flags=0x20001(STATIC/-) state=8(DEAD) err=0
pg=129(default) arg1=0 arg2=(null) reply=(null) pname=m000
pid=0 parent=c000000f9e6e3a20 fulfill=0000000000000000
So MMON is waiting for m000 to start. Can MMON waiting for m000 really hang the whole system? We need to analyze further.
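
As a hedged aside (not part of the original dump analysis), on a responsive 10g instance one could also check whether an m000 slave ever materialised, since slave names show up in v$process.program:

-- Look for MMON slaves (m000, m001, ...) among the server processes.
SELECT pid, spid, program
  FROM v$process
 WHERE program LIKE '%(M00%';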

3.

Now let us go back to the '>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=XXX' messages.


This wait had been present from shortly after 4 a.m. right up to the hang. From the ass output we can see the following (a query sketch for a live system is shown after the list):


254:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
255:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
256:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
257:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
258:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
259:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
260:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
261:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
262:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
263:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
264:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
265:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
266:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
267:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
268:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
269:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
270:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
271:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
272:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
273:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
274:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
275:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
276:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
277:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
278:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
279:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
280:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
281:waiting for ‘row cache lock‘ [Rcache object=c000000f9fdf8160,] wait
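
On a system that is still accepting queries, the same group of waiters can be seen directly from the wait interface: for the 'row cache lock' event, P1 is the dictionary cache id, which maps to v$rowcache (a hedged sketch, assuming the instance is responsive):

-- Sessions waiting on "row cache lock" and the dictionary cache involved.
-- P1 of this event is the cache id (here cache id=7 => dc_users).
SELECT DISTINCT s.sid, s.program, w.p1 AS cache_id, rc.parameter
  FROM v$session_wait w
  JOIN v$session      s  ON s.sid = w.sid
  LEFT JOIN v$rowcache rc ON rc.cache# = w.p1
 WHERE w.event = 'row cache lock';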


Check process 254:


service name: SYS$USERS
O/S info: user: report, term: , ospid: 17444, machine: qdbb2
program: [email protected] (TNS V1-V3)
application name: [email protected] (TNS V1-V3), hash value=516955253
waiting for ‘row cache lock‘ blocking sess=0x0000000000000000 seq=1 wait_time=0 seconds since wait started=3425
cache id=7, mode=0, request=3
Dumping Session Wait History
for ‘row cache lock‘ count=1 wait_time=2615950
cache id=7, mode=0, request=3
for ‘row cache lock‘ count=1 wait_time=2929664
cache id=7, mode=0, request=3
for ‘row cache lock‘ count=1 wait_time=2929674
cache id=7, mode=0, request=3
for ‘row cache lock‘ count=1 wait_time=2929666
cache id=7, mode=0, request=3


SO: c000000f9f9198b8, type: 50, owner: c000000ee30a3060, flag: INIT/-/-/0x00
row cache enqueue: count=1 session=c000000fba59b660 object=c000000f9fdf8160, request=S
savepoint=0x4
row cache parent object: address=c000000f9fdf8160 cid=7(dc_users)
hash=cdb2e447 typ=11 transaction=c000000faae8c710 flags=0000012a
own=c000000f9fdf8230[c000000f9f919a08,c000000f9f919a08] wat=c000000f9fdf8240[c000000f9f9198e8,c000000f9f918b08] mode=X


This is a dc_users object being requested in S mode, and the session is waiting for a holder in mode=X to release it. So we search for 'c000000f9fdf8160, mode=X' and find:


SO: c000000f9f9199d8, type: 50, owner: c000000faae8c710, flag: INIT/-/-/0x00
row cache enqueue: count=1 session=c000000fba5f3b30 object=c000000f9fdf8160, mode=X
savepoint=0x6
row cache parent object: address=c000000f9fdf8160 cid=7(dc_users)
hash=cdb2e447 typ=11 transaction=c000000faae8c710 flags=0000012a
own=c000000f9fdf8230[c000000f9f919a08,c000000f9f919a08] wat=c000000f9fdf8240[c000000f9f9198e8,c000000f9f918b08] mode=X
status=VALID/UPDATE/-/-/IO/-/-/-/-
request=N release=TRUE flags=0
instance lock id=QH b6ae8f92 4cc6da8c
set=0, complete=FALSE
set=1, complete=FALSE
set=2, complete=FALSE
This is the session that holds the dc_users row cache object, and it holds it in an incompatible mode. Scrolling back, we find which process this session belongs to:


PROCESS 61:
----------------------------------------
SO: c000000fbf3bfe48, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
(process) Oracle pid=61, calls cur/top: c000000fa0c0eb30/c000000f8b0fdef8, flag: (0) -
int error: 0, call error: 0, sess error: 0, txn error 0
(post info) last post received: 109 0 4
last post received-location: kslpsr
last process to post me: c000000fbc3c2d98 1 6
last post sent: 0 0 24
last post sent-location: ksasnd
last process posted by me: c000000fbc3c2d98 1 6
(latch info) wait_event=109 bits=0
Process Group: DEFAULT, pseudo proc: c000000fbc44e418
O/S info: user: oracle, term: UNKNOWN, ospid: 2006
OSD pid info: Unix process pid: 2006, image: [email protected]


SO: c000000fba5f3b30, type: 4, owner: c000000fbf3bfe48, flag: INIT/-/-/0x00
(session) sid: 1973 trans: c000000faae8c710, creator: c000000fbf3bfe48, flag: (41) USR/- BSY/-/-/-/-/-
DID: 0001-003D-041A6D36, short-term DID: 0000-0000-00000000
txn branch: 0000000000000000
oct: 0, prv: 0, sql: 0000000000000000, psql: 0000000000000000, user: 0/SYS
service name: SYS$USERS
O/S info: user: liping, term: L902054, ospid: 163000:163128, machine:
program: plsqldev.exe
application name: plsqldev.exe, hash value=2126984564
waiting for ‘log file switch (checkpoint incomplete)‘ blocking sess=0x0000000000000000 seq=9 wait_time=0 seconds since wait started=2290
=0, =0, =0
Dumping Session Wait History
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=87117
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=976593
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=979179
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=7099
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=976471
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=976611
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=973437
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=969860
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=976535
=0, =0, =0
for ‘log file switch (checkpoint incomplete)‘ count=1 wait_time=978815
=0, =0, =0
This is a PL/SQL Developer user session. From the output above, this session is the source of the blocking that is holding up a large number of sessions, yet it is itself a victim: it is waiting on 'log file switch (checkpoint incomplete)'. Why would that be?
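
'log file switch (checkpoint incomplete)' means that every online redo log group LGWR wants to reuse is still ACTIVE, i.e. the checkpoint protecting it has not completed yet. On a live instance this is usually confirmed with a quick look at v$log (a hedged sketch, not from the original analysis):

-- Groups stuck in ACTIVE status are still waiting for their checkpoint;
-- if every non-CURRENT group is ACTIVE, log switches stall.
SELECT group#, thread#, sequence#, status
  FROM v$log
 ORDER BY thread#, group#;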

4.

So let us look at the state of LGWR:


SO: c000000fba6167c0, type: 4, owner: c000000fbc3c2d98, flag: INIT/-/-/0x00
(session) sid: 2155 trans: 0000000000000000, creator: c000000fbc3c2d98, flag: (51) USR/- BSY/-/-/-/-/-
DID: 0001-0024-00000005, short-term DID: 0000-0000-00000000
txn branch: 0000000000000000
oct: 0, prv: 0, sql: 0000000000000000, psql: 0000000000000000, user: 0/SYS
service name: SYS$BACKGROUND
waiting for ‘rdbms ipc message‘ blocking sess=0x0000000000000000 seq=13185 wait_time=0 seconds since wait started=0
timeout=d3, =0, =0
Dumping Session Wait History
for ‘control file sequential read‘ count=1 wait_time=233
file#=0, block#=2e, blocks=1
for ‘control file sequential read‘ count=1 wait_time=210
file#=0, block#=2c, blocks=1
for ‘control file sequential read‘ count=1 wait_time=255
file#=0, block#=29, blocks=1
for ‘control file sequential read‘ count=1 wait_time=213
file#=0, block#=27, blocks=1
for ‘control file sequential read‘ count=1 wait_time=222
file#=2, block#=1, blocks=1
for ‘control file sequential read‘ count=1 wait_time=221
file#=1, block#=1, blocks=1
for ‘control file sequential read‘ count=1 wait_time=282
file#=0, block#=1, blocks=1
for ‘rdbms ipc message‘ count=1 wait_time=4564
timeout=d4, =0, =0
for ‘rdbms ipc message‘ count=1 wait_time=3188
timeout=d4, =0, =0
for ‘rdbms ipc message‘ count=1 wait_time=1
timeout=d4, =0, =0


Surprisingly, LGWR is waiting on 'rdbms ipc message', which means LGWR is not actually working: either it has nothing to do, or it is waiting to be posted once some other session finishes its work. Given that everyone is waiting on a log file switch, LGWR has no excuse to sit idle, so we have good reason to believe the checkpoint is the problem and LGWR is waiting for CKPT to finish its work. We therefore go on to check the state of CKPT.
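
Before reading the CKPT dump in the next step, note as a hedged aside that on a responsive instance the current waits of the background processes involved could be pulled straight from v$session joined to v$bgprocess; here only the dump was available:

-- Current wait of LGWR, CKPT and MMON (10g exposes wait columns in v$session).
SELECT b.name, s.sid, s.event, s.seconds_in_wait
  FROM v$session   s
  JOIN v$bgprocess b ON b.paddr = s.paddr
 WHERE b.name IN ('LGWR', 'CKPT', 'MMON');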


5.

Search for (ckpt):


SO: c000000fbb632130, type: 4, owner: c000000fbb40bc08, flag: INIT/-/-/0x00
(session) sid: 2154 trans: 0000000000000000, creator: c000000fbb40bc08, flag: (51) USR/- BSY/-/-/-/-/-
DID: 0001-0025-00000004, short-term DID: 0001-0025-00000005
txn branch: 0000000000000000
oct: 0, prv: 0, sql: 0000000000000000, psql: 0000000000000000, user: 0/SYS
service name: SYS$BACKGROUND
waiting for ‘enq: PR - contention‘ blocking sess=0xc000000fbf5e4278 seq=61352 wait_time=0 seconds since wait started=62446
name|mode=50520006, 0=0, 0=0
Dumping Session Wait History
for ‘enq: PR - contention‘ count=1 wait_time=429069
name|mode=50520006, 0=0, 0=0


for ‘enq: PR - contention‘ count=1 wait_time=2929688
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929668
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929682
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929671
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929679
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929666
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929683
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929624
name|mode=50520006, 0=0, 0=0
for ‘enq: PR - contention‘ count=1 wait_time=2929676
name|mode=50520006, 0=0, 0=0

CKPT has been waiting for the PR enqueue all along, and this time we finally see blocking sess=0xc000000fbf5e4278. That session is the culprit blocking CKPT; let us see who it is:


SO: c000000fbf5e4278, type: 4, owner: c000000fbe3c1910, flag: INIT/-/-/0x00
(session) sid: 2150 trans: 0000000000000000, creator: c000000fbe3c1910, flag: (100051) USR/- BSY/-/-/-/-/-
DID: 0001-0029-00000002, short-term DID: 0001-0029-00000003
txn branch: 0000000000000000
oct: 0, prv: 0, sql: c000000fa3ed29b8, psql: 0000000000000000, user: 0/SYS
service name: SYS$BACKGROUND
last wait for ‘os thread startup‘ blocking sess=0x0000000000000000 seq=59956 wait_time=1006468 seconds since wait started=62472
=0, =0, =0
We have found MMON again: MMON is the real culprit behind the system hang.
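
As an aside, from 10g onwards v$session exposes a blocking_session column that is populated for enqueue waits such as 'enq: PR - contention', so on an instance that still accepts queries the same blocking chain could have been spotted without a dump (a hedged sketch, not part of the original analysis):

-- Enqueue waiters and the sessions blocking them.
SELECT sid, event, blocking_session, seconds_in_wait
  FROM v$session
 WHERE blocking_session IS NOT NULL;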

6.

Case summary:


1. So far we have established the main cause of the hang: when MMON tried to start m000, m000 failed to start properly, so MMON held the PR enqueue for a long time, which blocked CKPT, which in turn prevented log switches from completing. Why MMON failed to start m000 has not yet been determined; that would require more material to analyze and is outside the scope of this article.


2. How should a problem like this be handled? In fact, once the analysis is clear, simply killing the MMON process, or adjusting statistics_level, might resolve the problem without restarting the instance (a sketch of the latter is given below). The analysis itself takes time, though, at least 30 to 60 minutes, and in a production incident it is hard to get that long. Note, however, that this problem first appeared at around 4 a.m. while the system did not hang until after 10 a.m., which left plenty of time for analysis. If we had spotted the anomaly promptly, the problem could most likely have been resolved before the system hung.
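
For completeness, the statistics_level workaround mentioned above would look roughly as follows; this is a hedged sketch only and should be tested before being used on a production system (killing the MMON OS process is the other option mentioned in the text):

-- Temporarily scale back MMON/AWR work by lowering statistics_level ...
ALTER SYSTEM SET statistics_level = BASIC;
-- ... and restore it once the hang has cleared.
ALTER SYSTEM SET statistics_level = TYPICAL;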


This article comes from the "我主梁緣" blog; please keep this attribution: http://xiaocao13140.blog.51cto.com/6198256/1970622
