
Oracle 11g R2 RAC heartbeat (interconnect) network troubleshooting

Overview: In RAC the heartbeat (interconnect) network plays a critical role. If its IP is misconfigured, or the NIC name is wrong, the cluster will fail to start, among other problems. And if you are unlucky and the interconnect NIC itself fails and has to be replaced, is a full reinstall really the only way out?

Scenarios tested:

1. Interconnect IP subnet configured incorrectly;

2. Interconnect NIC replaced, or its change applied incorrectly.

Experiment 1:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.registry.acfs
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cube.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac1    

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/oifcfg getif
eth1  100.100.100.0  global  cluster_interconnect
bond0  192.168.100.0  global  public
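Each line of `oifcfg getif` output is interface name, subnet, scope, and role. When scripting checks, the interconnect subnet can be filtered out with a small awk helper (a sketch; `ic_subnet` is a hypothetical name, not an Oracle tool):

```shell
# Filter `oifcfg getif` output: column 4 is the role, column 2 the subnet.
ic_subnet() {
  awk '$4 == "cluster_interconnect" {print $2}'
}
```

For example, `/u01/app/11.2.0/grid/bin/oifcfg getif | ic_subnet` would print `100.100.100.0` for the configuration above.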


[root@rac1 ~]# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:F7:67:C6  
          inet addr:100.100.100.100  Bcast:100.100.100.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fef7:67c6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15346 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39482 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:7218263 (6.8 MiB)  TX bytes:40006495 (38.1 MiB)

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/oifcfg setif -global eth1/100.100.200.0:cluster_interconnect
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/oifcfg getif
eth1  100.100.100.0  global  cluster_interconnect
bond0  192.168.100.0  global  public
eth1  100.100.200.0  global  cluster_interconnect
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/oifcfg delif -global eth1/100.100.100.0:cluster_interconnect
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/oifcfg getif
bond0  192.168.100.0  global  public
eth1  100.100.200.0  global  cluster_interconnect

Restart the cluster:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.cube.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac1'
CRS-2677: Stop of 'ora.cvu' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2674: Start of 'ora.cvu' on 'rac2' failed
CRS-2674: Start of 'ora.scan1.vip' on 'rac2' failed
CRS-2677: Stop of 'ora.cube.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start has
CRS-4123: Oracle High Availability Services has been started.

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stat res -t 
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.

Check the cluster startup log:

[/u01/app/11.2.0/grid/bin/orarootagent.bin(6592)]CRS-5818:Aborted command 'start' for resource 'ora.cluster_interconnect.haip'. Details at (:CRSAGF00113:) {0:0:2} in /u01/app/11.2.0/grid/log/rac1/agent/ohasd/orarootagent_root/orarootagent_root.log.
2018-01-26 09:16:47.206: 
[ohasd(6215)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.cluster_interconnect.haip'. Details at (:CRSPE00111:) {0:0:2} in /u01/app/11.2.0/grid/log/rac1/ohasd/ohasd.log

vi /u01/app/11.2.0/grid/log/rac1/ohasd/ohasd.log

85ed0 [0000000000000010] { gipchaContext : host 'rac1', name 'CLSFRAME_cluster', luid '31303692-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd
2018-01-26 09:17:58.681: [GIPCHDEM][3556767488]gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e85ed0 [0000000000000010] { gipchaContext : host 'rac1', name 'CLSFRAME_cluster', luid '31303692-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd
2018-01-26 09:18:04.689: [GIPCHDEM][3556767488]gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e85ed0 [0000000000000010] { gipchaContext : host 'rac1', name 'CLSFRAME_cluster', luid '31303692-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd
2018-01-26 09:18:09.698: [GIPCHDEM][3556767488]gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e85ed0 [0000000000000010] { gipchaContext : host 'rac1', name 'CLSFRAME_cluster', luid '31303692-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd
2018-01-26 09:18:14.714: [GIPCHDEM][3556767488]gipchaDaemonInfRequest: sent local interfaceRequest,  hctx 0x1e85ed0 [0000000000000010] { gipchaContext : host 'rac1', name 'CLSFRAME_cluster', luid '31303692-00000000', numNode 0, numInf 0, usrFlags 0x0, flags 0x63 } to gipcd
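When the symptoms repeat across several logs, a filter for the interconnect-related CRS messages saves scrolling. A minimal sketch (`scan_haip_errors` is an ad-hoc helper, not part of Grid Infrastructure; the error codes are the ones seen above):

```shell
# Grep a Clusterware log for the HAIP / interconnect failure signatures
# seen in this incident (CRS-2757 start timeout, CRS-5818 aborted start).
scan_haip_errors() {
  grep -E 'cluster_interconnect\.haip|CRS-2757|CRS-5818' "$1"
}
```

Run it against the ohasd and agent logs, e.g. `scan_haip_errors /u01/app/11.2.0/grid/log/rac1/ohasd/ohasd.log`.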

These log entries all point to a problem with the RAC interconnect. At this point the gpnptool utility can be used to correct the interconnect configuration:

Check the actual interconnect NIC configuration:

[root@rac1 ~]# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 08:00:27:F7:67:C6  
          inet addr:100.100.100.100  Bcast:100.100.100.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fef7:67c6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:39324 errors:0 dropped:0 overruns:0 frame:0
          TX packets:63992 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:21550499 (20.5 MiB)  TX bytes:54030781 (51.5 MiB)

Check the configuration recorded in the GPnP profile:

[grid@rac1 gpnp]$ gpnptool get
Warning: some command line parameters were defaulted. Resulting command line: 
         /u01/app/11.2.0/grid/bin/gpnptool.bin get -o-


<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="20" ClusterUId="afd4fb8a969e6ff0ff47999b3fa206d6" ClusterName="cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" Adapter="bond0" IP="192.168.100.0" Use="public"/><gpnp:Network id="net2" Adapter="eth1" IP="100.100.200.0" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*" SPFile="+OCR/cluster/asmparameterfile/registry.253.963054397"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>/+xj6ziNHXD2LxYcgnH83yVYWns=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>mh8+ulXRy6AgsgBB3AwJ6zdfmZzPTuvHbehpK97FMerHluIoQpmZmsKR0kwkrU03Bx8myEjwRfgz1XMJCZoNwAlvRfihxvQnV8kPAHRdrVOcz+HcYw/yvfkCS7WQJUTXpZwHzqM04xtmP5BgadE5AOxyDuI+hNfcgDeR24V0UhY=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.

Comparing the two: the actual interconnect NIC is on 100.100.100.x, while the profile records 100.100.200.x.
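Note that the mismatch is about network (subnet) addresses, not host addresses: 100.100.100.100 with mask 255.255.255.0 sits on network 100.100.100.0, and that network address is what the profile must record. A pure-bash check of the arithmetic (a sketch; `net_addr` is a hypothetical helper):

```shell
# AND each octet of the address with the netmask to get the network address.
net_addr() {
  local IFS=.
  local ip=() mask=() out=()
  read -ra ip <<< "$1"
  read -ra mask <<< "$2"
  for i in 0 1 2 3; do out[i]=$(( ip[i] & mask[i] )); done
  echo "${out[0]}.${out[1]}.${out[2]}.${out[3]}"
}

net_addr 100.100.100.100 255.255.255.0   # prints 100.100.100.0
```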

Unlock the Grid home, then start CRS in exclusive mode without the crsd process:

[root@rac1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -unlock
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully unlock /u01/app/11.2.0/grid
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-5017: The resource action "ora.cluster_interconnect.haip start" encountered the following error: 
Start action for HAIP aborted. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/rac1/agent/ohasd/orarootagent_root/orarootagent_root.log".
CRS-2674: Start of 'ora.cluster_interconnect.haip' on 'rac1' failed
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-4000: Command Start failed, or completed with errors.

[grid@rac1 ~]$ mkdir gpnp
[grid@rac1 ~]$ gpnptool get -o=/home/grid/gpnp/p.xml
Resulting profile written to "/home/grid/gpnp/p.xml".
Success.
[grid@rac1 ~]$ cd gpnp/

[grid@rac1 gpnp]$ cp p.xml profile.xml

[grid@rac1 gpnp]$ gpnptool getpval -p=/home/grid/gpnp/p.xml -prf_sq -o-
20

[grid@rac1 gpnp]$ gpnptool getpval -p=/home/grid/gpnp/p.xml -net -o-
net1 net2

[grid@rac1 gpnp]$ gpnptool edit -p=/home/grid/gpnp/p.xml -o=/home/grid/gpnp/p.xml -ovr -prf_sq=21 -net2:net_ip=100.100.100.0
Resulting profile written to "/home/grid/gpnp/p.xml".
Success.

Verify that the change took effect:

[grid@rac1 gpnp]$ cat p.xml 
<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="21" ClusterUId="afd4fb8a969e6ff0ff47999b3fa206d6" ClusterName="cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" Adapter="bond0" IP="192.168.100.0" Use="public"/><gpnp:Network id="net2" Adapter="eth1" IP="100.100.100.0" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*" SPFile="+OCR/cluster/asmparameterfile/registry.253.963054397"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>/+xj6ziNHXD2LxYcgnH83yVYWns=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>mh8+ulXRy6AgsgBB3AwJ6zdfmZzPTuvHbehpK97FMerHluIoQpmZmsKR0kwkrU03Bx8myEjwRfgz1XMJCZoNwAlvRfihxvQnV8kPAHRdrVOcz+HcYw/yvfkCS7WQJUTXpZwHzqM04xtmP5BgadE5AOxyDuI+hNfcgDeR24V0UhY=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>

[grid@rac1 gpnp]$ gpnptool getpval -p=/home/grid/gpnp/p.xml -prf_sq -o-
21
[grid@rac1 gpnp]$ gpnptool sign -p=/home/grid/gpnp/p.xml -o=/home/grid/gpnp/p.xml -ovr -w=cw-fs:peer
Resulting profile written to "/home/grid/gpnp/p.xml".
Success.

[grid@rac1 gpnp]$ gpnptool put -p=/home/grid/gpnp/p.xml 

Success.

[grid@rac1 gpnp]$ gpnptool find -c=cluster


Found 1 instances of service 'gpnp'.
        mdns:service:gpnp._tcp.local.://rac1:37760/agent=gpnpd,cname=cluster,host=rac1,pid=8419/gpnpd h:rac1 c:cluster
[grid@rac1 gpnp]$ gpnptool rget -c=cluster
Warning: some command line parameters were defaulted. Resulting command line: 
         /u01/app/11.2.0/grid/bin/gpnptool.bin rget -c=cluster -o-




Found 1 gpnp service instance(s) to rget profile from.


RGET from tcp://rac1:37760 (mdns:service:gpnp._tcp.local.://rac1:37760/agent=gpnpd,cname=cluster,host=rac1,pid=8419/gpnpd h:rac1 c:cluster):


<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="21" ClusterUId="afd4fb8a969e6ff0ff47999b3fa206d6" ClusterName="cluster" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" Adapter="bond0" IP="192.168.100.0" Use="public"/><gpnp:Network id="net2" Adapter="eth1" IP="100.100.100.0" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*" SPFile="+OCR/cluster/asmparameterfile/registry.253.963054397"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>fF43VW1hAMTBFOZ+tuw3qbQSQDo=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>pYENVeL1XgZ2/fYmDY7xxMw+qY4AOYu+RApbJtVfIUA2muPuDmKVLYDddQOrX0XfwRdS3fFdO77cJuv1ApFUEQIDVjQdxVnzQWK+QhJUOvfsl1oE1g+rMNuJq3S6VWoRXLt4pr4wzY7CBkKdnoCdqsRu6u3yECWFiECUr6guk2Y=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.

Restart the cluster:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start has
CRS-4123: Oracle High Availability Services has been started.

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.OCR.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.registry.acfs
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cube.db
      1        ONLINE  ONLINE       rac1                     Open                
      2        ONLINE  ONLINE       rac2                     Open                
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac1

Summary: this article showed how to recover a cluster that will no longer start after the interconnect IP was changed incorrectly, by repairing the GPnP profile with gpnptool. If the NIC (adapter name) was changed instead, the procedure is the same; try it yourself.
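For the NIC-change case, the flow is presumably the same: dump the profile, edit it, re-sign, and put it back; only the edited attribute differs. The `-net2:net_ada` flag is assumed from gpnptool's edit syntax (verify with `gpnptool edit -help` on your system first). A small helper to read the current interconnect adapter out of a profile dump (sketch; `get_ic_adapter` is a hypothetical name):

```shell
# Extract the Adapter attribute of the cluster_interconnect entry from a
# saved GPnP profile XML dump.
get_ic_adapter() {
  sed -n 's/.*Adapter="\([^"]*\)"[^>]*Use="cluster_interconnect".*/\1/p' "$1"
}

# Then the same edit/sign/put sequence as the IP fix, swapping the adapter:
#   gpnptool edit -p=p.xml -o=p.xml -ovr -prf_sq=22 -net2:net_ada=eth2
#   gpnptool sign -p=p.xml -o=p.xml -ovr -w=cw-fs:peer
#   gpnptool put  -p=p.xml
```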
