
Changing the IP Addresses of an Oracle RAC Cluster

1. Preparation

Confirm the current RAC status:

[[email protected] ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.LISTENER.lsnr
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.asm
               ONLINE  ONLINE       db1                      Started
               ONLINE  ONLINE       db2                      Started
ora.eons
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.gsd
               OFFLINE OFFLINE      db1
               OFFLINE OFFLINE      db2
ora.net1.network
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.ons
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db1
ora.db.db
      1        ONLINE  ONLINE       db1                      Open
      2        ONLINE  ONLINE       db2                      Open
ora.db1.vip
      1        ONLINE  ONLINE       db1
ora.db2.vip
      1        ONLINE  ONLINE       db2
ora.oc4j
      1        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       db1

IP configuration before the change:

[[email protected] ~]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               odd.up.com odd localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.1.161           db1.up.com db1
192.168.1.162           db2.up.com db2
10.0.1.161              db1-priv.up.com db1-priv
10.0.1.162              db2-priv.up.com db2-priv
192.168.1.163           db1-vip.up.com db1-vip
192.168.1.164           db2-vip.up.com db2-vip
192.168.1.165           db-cluster

The planned changes:

Item             Before change      After change
db1 Public IP    192.168.1.161      20.0.1.161
db2 Public IP    192.168.1.162      20.0.1.162
db1-vip IP       192.168.1.163      20.0.1.163
db2-vip IP       192.168.1.164      20.0.1.164
db1-priv IP      10.0.1.161         100.0.1.161
db2-priv IP      10.0.1.162         100.0.1.162
db-cluster       192.168.1.165      20.0.1.165

2. Procedure

2.1. Preparation Before the Change

Stop the databases and listeners on both nodes, then stop CRS.

Disable database autostart and stop the database (both nodes):

[[email protected] ~]# srvctl disable database -d db
[[email protected] ~]# srvctl stop database -d db
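Before going further, it is worth confirming the database really is down on both nodes. A quick check (the expected output in the comments assumes the instance names match the node names):

srvctl status database -d db
# Instance db1 is not running on node db1
# Instance db2 is not running on node db2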

Disable and stop the LISTENER on all nodes (both nodes):

[[email protected] ~]# srvctl disable listener
[[email protected] ~]# srvctl stop listener

Disable and stop the VIPs on all nodes. Note: (a) when operating on a VIP, the name you pass is the VIP name configured in /etc/hosts; (b) only the root user can disable a VIP resource. (Run from node 1.)

[[email protected] ~]# srvctl disable vip -i "db1-vip"
[[email protected] ~]# srvctl disable vip -i "db2-vip"
[[email protected] ~]# srvctl stop vip -n db1
[[email protected] ~]# srvctl stop vip -n db2

Disable and stop the SCAN listener (run from one node):

[[email protected] ~]# srvctl disable scan_listener
[[email protected] ~]# srvctl stop scan_listener

Disable and stop the SCAN (run from node 1):

[[email protected] ~]# srvctl disable scan
[[email protected] ~]# srvctl stop scan

Stop the clusterware on both nodes.

On node 1:

[[email protected] ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs

On node 2:

[[email protected] ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
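To confirm the whole stack is down on each node before touching the OS network configuration, something like the following helps (the exact CRS- message can vary by version):

/u01/app/11.2.0/grid/bin/crsctl check crs
# expected once stopped:
# CRS-4639: Could not contact Oracle High Availability Services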

Modify /etc/hosts on both nodes:

20.0.1.161         db1.up.com db1
20.0.1.162         db2.up.com db2
10.0.1.161         db1-priv.up.com db1-priv
10.0.1.162         db2-priv.up.com db2-priv
20.0.1.163        db1-vip.up.com db1-vip
20.0.1.164        db2-vip.up.com db2-vip
20.0.1.165         db-cluster

Note that as a first step the private IPs are left unchanged.

Change the NIC IP addresses using OS commands.

Run the same operations on both node 1 and node 2:

[[email protected] ~]# system-config-network
[[email protected] ~]# ifdown eth0
[[email protected] ~]# ifup eth0
[[email protected] ~]# ifconfig |grep inet
          inet addr:20.0.1.161  Bcast:20.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6c:3749/64 Scope:Link
          inet addr:10.0.1.161  Bcast:10.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe25:bf57/64 Scope:Link
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
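system-config-network is only a front end over the ifcfg scripts, so the same change can be made by editing the file directly. A minimal sketch for the public NIC on db1, assuming the standard RHEL network-scripts layout (keep any existing HWADDR/GATEWAY lines):

# /etc/sysconfig/network-scripts/ifcfg-eth0 on db1
DEVICE=eth0
BOOTPROTO=static
IPADDR=20.0.1.161       # new public IP
NETMASK=255.255.255.0
ONBOOT=yes

Then bounce the interface with ifdown eth0 followed by ifup eth0, as above.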

2.2. Changing the Public IP

Start CRS on both nodes, then change the public network with the oifcfg command.

Start CRS on node 1:

[[email protected] bin]# /u01/app/11.2.0/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Start CRS on node 2:

[[email protected] bin]# /u01/app/11.2.0/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

On node 1:

[[email protected] bin]# ./oifcfg delif -global eth0
[[email protected] bin]# ./oifcfg setif -global eth0/20.0.1.0:public
[[email protected] bin]# ./oifcfg getif
eth1  10.0.1.0  global  cluster_interconnect
eth0  20.0.1.0  global  public

Verify on node 2:

[[email protected] ~]# /u01/app/11.2.0/grid/bin/oifcfg getif
eth1  10.0.1.0  global  cluster_interconnect
eth0  20.0.1.0  global  public

Node 2 has picked up the change as well.

Now check the VIPs on both nodes. The database is not running at this point; if it were, it should be shut down first.

[[email protected] ~]# srvctl config vip -n db1
VIP exists.:db1
VIP exists.: /db1-vip/20.0.1.163/255.255.255.0/eth0
[[email protected] ~]# srvctl config vip -n db2
VIP exists.:db2
VIP exists.: /db2-vip/20.0.1.164/255.255.255.0/eth0

At this point the VIP configuration shows the addresses were updated automatically.

If the VIPs still show the old addresses, use the following procedure instead.

As the root user, stop the VIP and listener resources and modify the VIPs:

srvctl stop listener -n db1
srvctl stop listener -n db2
srvctl stop vip -n db1
srvctl stop vip -n db2
srvctl modify nodeapps -n db1 -A 20.0.1.163/255.255.255.0/eth0
srvctl modify nodeapps -n db2 -A 20.0.1.164/255.255.255.0/eth0

Start the VIPs and listeners again:

srvctl start listener -n db1
srvctl start listener -n db2
srvctl start vip -n db1
srvctl start vip -n db2

Then verify the VIP status again; a quick check is sketched below.
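For example, using the same srvctl checks as elsewhere in this article:

srvctl status vip -n db1      # expect: enabled and running on db1
srvctl status vip -n db2
srvctl config vip -n db1      # should now show /db1-vip/20.0.1.163/255.255.255.0/eth0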

Update the local_listener parameter

In my test this step was not needed. My database version is 11.2.0.3, and when the database was started at the end, after all the other changes, the alert log showed the following:

Completed: ALTER DATABASE OPEN
Wed Mar 19 12:51:52 2014
ALTER SYSTEM SET local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=db1-vip)(PORT=1521))))' SCOPE=MEMORY SID='db1';
Wed Mar 19 12:52:05 2014

It was registered automatically.

SQL> show parameter local_listener
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string

First check local_listener on both nodes. In my RAC environment local_listener is not set. If it is set, you will see that the listener address is still the old VIP address and it must be updated, with commands such as the following (note they point at the new VIP addresses):

alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=20.0.1.163)(PORT=1521))))' scope=both sid='db1';
alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=20.0.1.164)(PORT=1521))))' scope=both sid='db2';
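Afterwards, the setting can be confirmed across both instances from either node; gv$parameter carries one row per instance:

SQL> select inst_id, value from gv$parameter where name = 'local_listener';
SQL> show parameter local_listener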

2.3. Changing the SCAN IP

Proceed as follows:

[[email protected] ~]# srvctl config scan
SCAN name: db-cluster, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /192.168.1.165/192.168.1.165
[[email protected] ~]# srvctl status scan
SCAN VIP scan1 is disabled
SCAN VIP scan1 is not running
[[email protected] ~]# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is disabled
SCAN listener LISTENER_SCAN1 is not running
[[email protected] ~]# srvctl modify scan -n db-cluster
[[email protected] ~]# srvctl config scan
SCAN name: db-cluster, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /db-cluster/20.0.1.165

[[email protected] ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               odd.up.com odd localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
20.0.1.161         db1.up.com db1
20.0.1.162         db2.up.com db2
10.0.1.161         db1-priv.up.com db1-priv
10.0.1.162         db2-priv.up.com db2-priv
20.0.1.163        db1-vip.up.com db1-vip
20.0.1.164        db2-vip.up.com db2-vip
20.0.1.165         db-cluster
[[email protected] ~]# srvctl modify scan -n 20.0.1.165
[[email protected] ~]# srvctl config scan
SCAN name: 20.0.1.165, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /db-cluster/20.0.1.165

Notice that the network is still on the 192.168.1.0 subnet, so this way of modifying it is not right.

Experimentation shows that the SCAN's subnet comes from the USR_ORA_SUBNET attribute of the ora.net1.network resource, so before modifying the SCAN, set that attribute to the new network number:

[[email protected] ~]# crsctl modify res "ora.net1.network"  -attr "USR_ORA_SUBNET=20.0.1.0"
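The attribute can be checked before touching the SCAN, since crsctl stat res -p prints all attributes of a resource:

crsctl stat res ora.net1.network -p | grep USR_ORA_SUBNET
# expected after the modify: USR_ORA_SUBNET=20.0.1.0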

[[email protected] ~]# srvctl modify scan -n db-cluster

This changes the value via the name db-cluster. srvctl only offers a name-based option for modifying the SCAN configuration, so presumably Oracle resolves the name (via DNS or /etc/hosts) to obtain the corresponding IP.
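Since resolution in this environment comes from /etc/hosts, it is worth confirming that every node resolves the SCAN name to the new address before running the modify (getent follows the resolver order in /etc/nsswitch.conf):

getent hosts db-cluster     # expect: 20.0.1.165   db-cluster
ping -c 1 db-cluster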

[[email protected] ~]# srvctl config scan
SCAN name: db-cluster, Network: 1/20.0.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /db-cluster/20.0.1.165

Node 2 shows the change as well:

[[email protected] ~]# srvctl config scan
SCAN name: db-cluster, Network: 1/20.0.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /db-cluster/20.0.1.165

Start the SCAN and the SCAN listener:

[[email protected] ~]# srvctl enable scan
[[email protected] ~]# srvctl start scan
[[email protected] ~]# srvctl enable scan_listener
[[email protected] ~]# srvctl start scan_listener
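A quick sanity check with the same status commands used earlier:

srvctl status scan            # SCAN VIP scan1 is enabled / is running on node db1
srvctl status scan_listener   # SCAN Listener LISTENER_SCAN1 is enabled / is running on node db1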

Re-enable and start the services that were disabled in the preparation step:

[[email protected] ~]# srvctl enable vip -i "db1-vip"
[[email protected] ~]# srvctl enable vip -i "db2-vip"
[[email protected] ~]# srvctl start vip -n db1
[[email protected] ~]# srvctl start vip -n db2
[[email protected] ~]# srvctl enable listener
[[email protected] ~]# srvctl start listener

The cluster status is now back to normal:

[[email protected] ~]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.LISTENER.lsnr
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.asm
               ONLINE  ONLINE       db1                      Started
               ONLINE  ONLINE       db2                      Started
ora.eons
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.gsd
               OFFLINE OFFLINE      db1
               OFFLINE OFFLINE      db2
ora.net1.network
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
ora.ons
               ONLINE  ONLINE       db1
               ONLINE  ONLINE       db2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       db1
ora.db.db
      1        OFFLINE OFFLINE
      2        OFFLINE OFFLINE
ora.db1.vip
      1        ONLINE  ONLINE       db1
ora.db2.vip
      1        ONLINE  ONLINE       db2
ora.oc4j
      1        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       db1

2.4. Changing the Private IP

Next, change the private IP.

Both nodes show Active status:

[[email protected] ~]# olsnodes -s
db1     Active
db2     Active

First change the eth1 IP address at the OS level on each host, then proceed with the changes below.
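As with the public NIC earlier, the OS-level change is an edit to the interconnect's ifcfg file plus an interface bounce, on both nodes. A sketch for db1 (db2 uses 100.0.1.162):

# /etc/sysconfig/network-scripts/ifcfg-eth1 on db1
DEVICE=eth1
BOOTPROTO=static
IPADDR=100.0.1.161      # new private (interconnect) IP
NETMASK=255.255.255.0
ONBOOT=yes
# then: ifdown eth1; ifup eth1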

Now update the configuration in /etc/hosts:

[[email protected] ~]# vim /etc/hosts
[[email protected] ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               odd.up.com odd localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
20.0.1.161         db1.up.com db1
20.0.1.162         db2.up.com db2
20.0.1.163        db1-vip.up.com db1-vip
20.0.1.164        db2-vip.up.com db2-vip
20.0.1.165         db-cluster
100.0.1.161         db1-priv.up.com db1-priv
100.0.1.162         db2-priv.up.com db2-priv

The interconnect registered with the cluster still shows the old subnet:

[[email protected] ~]# oifcfg getif
eth1  10.0.1.0  global  cluster_interconnect
eth0  20.0.1.0  global  public

Set the new subnet on the same NIC, then delete the old definition:

[[email protected] ~]# oifcfg setif -global eth1/100.0.1.0:cluster_interconnect
[[email protected] ~]# oifcfg getif
eth1  10.0.1.0  global  cluster_interconnect
eth0  20.0.1.0  global  public
eth1  100.0.1.0  global  cluster_interconnect
[[email protected] ~]# oifcfg delif -global eth1/10.0.1.0
[[email protected] ~]# oifcfg getif
eth0  20.0.1.0  global  public
eth1  100.0.1.0  global  cluster_interconnect

Reference commands

When the interconnect stays on the same NIC:

oifcfg getif
oifcfg setif -global eth1/100.0.1.0:cluster_interconnect
oifcfg delif -global eth1/10.0.1.0
oifcfg getif

When moving to a new NIC:

oifcfg getif
oifcfg setif -global eth3/100.0.1.0:cluster_interconnect
oifcfg delif -global eth1
oifcfg getif

Start the database:

[[email protected] ~]# srvctl enable database -d db
[[email protected] ~]# srvctl start database -d db

Verify the cluster status again.
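Besides crsctl status res -t, the instances themselves report which interconnect they picked up; gv$cluster_interconnects should now show the eth1 addresses on the new subnet:

SQL> select inst_id, name, ip_address, is_public from gv$cluster_interconnects;
-- expect eth1 with 100.0.1.161 / 100.0.1.162 and IS_PUBLIC = NO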

One caveat from my run: I changed the private IP in the cluster configuration without first changing the NIC's IP address at the OS level, and restarting CRS then reported the following errors:

2014-03-19 12:13:25.446: [ CRSMAIN][1624389360] Checking the OCR device
2014-03-19 12:13:25.449: [ CRSMAIN][1624389360] Connecting to the CSS Daemon
2014-03-19 12:13:25.477: [ CRSMAIN][1624389360] Initializing OCR
2014-03-19 12:13:25.483: [  OCRAPI][1624389360]clsu_get_private_ip_addr: Calling clsu_get_private_ip_addresses to get first private ip
2014-03-19 12:13:25.483: [  OCRAPI][1624389360]Check namebufs
2014-03-19 12:13:25.483: [  OCRAPI][1624389360]Finished checking namebufs
2014-03-19 12:13:25.485: [    GIPC][1624389360] gipcCheckInitialization: possible incompatible non-threaded init from [clsinet.c : 3229], original from [clsss.c : 5011]
2014-03-19 12:13:25.490: [    GPnP][1624389360]clsgpnp_Init: [at clsgpnp0.c:404] gpnp tracelevel 3, component tracelevel 0
2014-03-19 12:13:25.490: [    GPnP][1624389360]clsgpnp_Init: [at clsgpnp0.c:534] '/u01/app/11.2.0/grid' in effect as GPnP home base.
2014-03-19 12:13:25.501: [    GIPC][1624389360] gipcCheckInitialization: possible incompatible non-threaded init from [clsgpnp0.c : 680], original from [clsss.c : 5011]
2014-03-19 12:13:25.504: [    GPnP][1624389360]clsgpnp_InitCKProviders: [at clsgpnp0.c:3866] Init gpnp local security key providers (2) fatal if both fail
2014-03-19 12:13:25.505: [    GPnP][1624389360]clsgpnp_InitCKProviders: [at clsgpnp0.c:3869] Init gpnp local security key proveders 1 of 2: file wallet (LSKP-FSW)
2014-03-19 12:13:25.506: [    GPnP][1624389360]clsgpnpkwf_initwfloc: [at clsgpnpkwf.c:398] Using FS Wallet Location : /u01/app/11.2.0/grid/gpnp/db1/wallets/peer/
2014-03-19 12:13:25.506: [    GPnP][1624389360]clsgpnp_InitCKProviders: [at clsgpnp0.c:3891] Init gpnp local security key provider 1 of 2: file wallet (LSKP-FSW) OK
2014-03-19 12:13:25.506: [    GPnP][1624389360]clsgpnp_InitCKProviders: [at clsgpnp0.c:3897] Init gpnp local security key proveders 2 of 2: OLR wallet (LSKP-CLSW-OLR)
[   CLWAL][1624389360]clsw_Initialize: OLR initlevel [30000]
2014-03-19 12:13:25.527: [    GPnP][1624389360]clsgpnp_InitCKProviders: [at clsgpnp0.c:3919] Init gpnp local security key provider 2 of 2: OLR wallet (LSKP-CLSW-OLR) OK
2014-03-19 12:13:25.527: [    GPnP][1624389360]clsgpnp_getCK: [at clsgpnp0.c:1950] <Get gpnp security keys (wallet) for id:1,typ;7. (2 providers - fatal if all fail)
2014-03-19 12:13:25.527: [    GPnP][1624389360]clsgpnpkwf_getWalletPath: [at clsgpnpkwf.c:498] req_id=1 ck_prov_id=1 wallet path: /u01/app/11.2.0/grid/gpnp/db1/wallets/peer/
2014-03-19 12:13:25.598: [    GPnP][1624389360]clsgpnpwu_walletfopen: [at clsgpnpwu.c:494] Opened SSO wallet: '/u01/app/11.2.0/grid/gpnp/db1/wallets/peer/cwallet.sso'
2014-03-19 12:13:25.598: [    GPnP][1624389360]clsgpnp_getCK: [at clsgpnp0.c:1965] Result: (0) CLSGPNP_OK. Get gpnp wallet - provider 1 of 2 (LSKP-FSW(1))
2014-03-19 12:13:25.598: [    GPnP][1624389360]clsgpnp_getCK: [at clsgpnp0.c:1982] Got gpnp security keys (wallet).>
2014-03-19 12:13:25.608: [    GPnP][1624389360]clsgpnp_getCK: [at clsgpnp0.c:1950] <Get gpnp security keys (wallet) for id:1,typ;4. (2 providers - fatal if all fail)
2014-03-19 12:13:25.608: [    GPnP][1624389360]clsgpnpkwf_getWalletPath: [at clsgpnpkwf.c:498] req_id=1 ck_prov_id=1 wallet path: /u01/app/11.2.0/grid/gpnp/db1/wallets/peer/
2014-03-19 12:13:25.671: [    GPnP][1624389360]clsgpnpwu_walletfopen: [at clsgpnpwu.c:494] Opened SSO wallet: '/u01/app/11.2.0/grid/gpnp/db1/wallets/peer/cwallet.sso'
2014-03-19 12:13:25.672: [    GPnP][1624389360]clsgpnp_getCK: [at clsgpnp0.c:1965] Result: (0) CLSGPNP_OK. Get gpnp wallet - provider 1 of 2 (LSKP-FSW(1))
2014-03-19 12:13:25.672: [    GPnP][1624389360]clsgpnp_getCK: [at clsgpnp0.c:1982] Got gpnp security keys (wallet).>
2014-03-19 12:13:25.672: [    GPnP][1624389360]clsgpnp_Init: [at clsgpnp0.c:837] GPnP client pid=23803, tl=3, f=0
2014-03-19 12:13:25.770: [  OCRAPI][1624389360]clsu_get_private_ip_addresses: no ip addresses found.
2014-03-19 12:13:25.770: [GIPCXCPT][1624389360] gipcShutdownF: skipping shutdown, count 2, from [ clsinet.c : 1732], ret gipcretSuccess (0)
