ORACLE 11gR2 RAC Node Addition and Deletion (Normal and Forced) Operation Steps — Deletion

This article is largely reposted from [ http://www.cnxdug.org/?p=2511 ], with a number of details added from my own experiments. Many thanks to the original author.

Deleting RAC Nodes


Here we walk through two scenarios: deleting a RAC node normally while the node can still be started, and forcibly removing a node from the RAC cluster when a hardware failure or other problem keeps it down for an extended period.

The normal deletion procedure is exactly the reverse of adding a node: first delete the database instance, then remove the database software (ORACLE_HOME),
then delete the node at the clusterware layer, and finally remove the GRID software (GRID_HOME).

The forced deletion procedure follows roughly the same logic as the normal one; only some of the details differ.

Here we take the normal deletion of racdb3 (the node targeted by the logs below) and the forced deletion of racdb2 as our examples.

About downtime:

Deleting a RAC node does not require downtime either; it can be done online, so no outage window needs to be requested. That said, if circumstances allow, it is prudent to request a downtime window of a few hours, in case something goes wrong during the operation and destabilizes the whole cluster.

Normal RAC Node Deletion Procedure
Removing the oracle instance
As with adding an instance, removing an instance from the RAC database can be done with the dbca tool.

In 11gR2 there are two types of RAC databases, policy-managed databases and administrator-managed databases,
and the instance-deletion method differs between them. The procedure for a policy-managed database is more involved; see the official documentation:

https://docs.oracle.com/cd/E18283_01/rac.112/e16795/adddelunix.htm#CEGIEDFF

Our database here is administrator-managed, so we proceed as follows:

1. Decouple the instance from its services
If the database instance is associated with a service, first relocate the service using the srvctl tool or EM, then modify the service
so that it runs only on the nodes you do not plan to delete. Make sure the instance to be deleted is neither a preferred instance nor an available instance of any service.

Our situation here is simple: no services were created, so no service-related work is needed.
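For environments that do have services, the relocate-then-modify sequence described above would look roughly like this. The service name racdbsvc and the instance names are hypothetical, and srvctl is stubbed with echo so the sketch can run anywhere; on a real cluster, remove the stub and run the commands as the oracle user.

```shell
# srvctl stub: prints the command instead of executing it (illustration only)
srvctl() { echo "srvctl $*"; }

# 1. Move the service off the instance that will be deleted
srvctl relocate service -d racdb -s racdbsvc -i racdb1 -t racdb3

# 2. Redefine the service so the doomed instance is neither preferred nor available
srvctl modify service -d racdb -s racdbsvc -n -i racdb3 -f

# 3. Verify where the service is now defined to run
srvctl config service -d racdb -s racdbsvc
```
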

2. Back up the OCR
First check whether an automatic OCR backup exists by running: ocrconfig -showbackup. If there is none, take a manual backup, running as root:

ocrconfig -export ocr_racdb3.bak

ocrconfig -manualbackup
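The check-then-backup decision above can also be scripted. In this sketch ocrconfig is stubbed with a shell function (pretending no automatic backup exists) so the flow can run outside a RAC node; on a real node, drop the stub and run as root.

```shell
# Stubbed ocrconfig for illustration: -showbackup prints nothing (no backups),
# other subcommands just echo what would be executed.
ocrconfig() {
  [ "$1" = "-showbackup" ] && return 0
  echo "would run: ocrconfig $*"
}

# Take a manual OCR backup only if no automatic backup is listed
if [ -z "$(ocrconfig -showbackup)" ]; then
  ocrconfig -export ocr_racdb3.bak
  ocrconfig -manualbackup
fi
```
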

3. Use the dbca tool to remove the instance from the RAC database.
There are two ways to do this, GUI and command line; to reinforce the steps, we demonstrate both.

Method 1: GUI:

Invoke dbca as the oracle user and step through the wizard (screenshots omitted): click Next through the initial screens, enter the user name "sys" and the sys user's password, click Next, then "Finish", then "OK" twice. The deletion pauses for a while around 33% progress; when it completes, click "No" at the final prompt.


Method 2: command line:
The command-line form is:

dbca -silent -deleteInstance [-nodeList node_name] -gdbName gdb_name -instanceName instance_name -sysDBAUserName sysdba -sysDBAPassword password

Substituting our actual values:


dbca -silent -deleteInstance -gdbName racdb -instanceName racdb1 -nodelist racdb1 -sysDBAUserName sys -sysDBAPassword password

The command must be run as the oracle user on a node that is not being deleted; we again run it on node 3. The session is recorded below:


[oracle@racdb3 ~]$ dbca -silent -deleteInstance -gdbName racdb -instanceName racdb1 -nodelist racdb1 -sysDBAUserName sys -sysDBAPassword PASSWORD

Deleting instance

20% complete

……

66% complete

Completing instance management.

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/racdb.log" for further details


=================================================================================================
=================================================================================================


Removing the oracle software
1. Handle the listener on the node to be deleted

If a listener is configured in the Oracle RAC home and registered in the clusterware, it must first be disabled and stopped. As the grid user, run the following on any node:

$ srvctl disable listener -l listener_name -n NAME_OF_NODE_TO_BE_DELETED

$ srvctl stop listener -l listener_name -n NAME_OF_NODE_TO_BE_DELETED

We did not configure a separate listener here, only the default one; we disable and stop it by running the following as the grid user on node 3:

[grid@racdb3 ~]$ srvctl disable listener -l listener -n racdb3

[grid@racdb3 ~]$ srvctl stop listener -l listener -n racdb3

2. Update the NodeList on the node to be deleted

On the node to be deleted, run as the oracle user:

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location "CLUSTER_NODES={name_of_node_to_delete}" -local

Substituting our actual values, the command (run as the oracle user) is:

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=racdb3" -local

The output is recorded below:


[oracle@racdb3 bin]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=racdb3" -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2881 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@racdb3 bin]$


3. Delete the ORACLE_HOME directory

If the ORACLE_HOME is shared, run the following on the node to be deleted:
cd $ORACLE_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location

If it is not shared, run instead:
${ORACLE_HOME}/deinstall/deinstall -local


Ours is not shared, so we follow the latter method.
The actual commands, run as the oracle user on the node to be deleted:

[oracle@racdb3 ~]$ cd $ORACLE_HOME/deinstall

[oracle@racdb3 deinstall]$ ./deinstall -local

[oracle@racdb3 bin]$ cd $ORACLE_HOME/deinstall
[oracle@racdb3 deinstall]$ ls
bootstrap.pl deinstall deinstall.pl deinstall.xml jlib readme.txt response sshUserSetup.sh
[oracle@racdb3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: racdb3
Checking for sufficient temp space availability on node(s) : 'racdb3'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-30_09-09-04-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2018-12-30_09-09-08-AM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2018-12-30_09-09-12-AM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check313.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:racdb3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'racdb3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y      <-- the only step that needs manual input: type y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-30_09-08-56-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-30_09-08-56-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2018-12-30_09-09-12-AM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-30_09-09-31-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-30_09-09-31-AM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean313.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-30_09-08-37AM' on node 'racdb3'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@racdb3 deinstall]$


4. Update the inventory on the remaining nodes

On any node being kept, i.e. racdb1 or racdb2, run as the oracle user:

cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location "CLUSTER_NODES={remaining_node_list}"


Substituting our actual values:

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=racdb1,racdb2"

We run it on racdb1; the session is recorded below:

[root@racdb1 ~]# su - oracle
[oracle@racdb1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racdb1 bin]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@racdb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES= racdb1,racdb2"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2104 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
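The CLUSTER_NODES value above was typed by hand; when scripting this step, the remaining-node list can be derived by filtering the deleted node out of the full member list. This is plain POSIX shell using the node names of this walkthrough, safe to run anywhere:

```shell
all_nodes="racdb1 racdb2 racdb3"   # current cluster members
deleted="racdb3"                   # node being removed

# Build a comma-separated list of the surviving nodes
remaining=""
for n in $all_nodes; do
  [ "$n" = "$deleted" ] || remaining="${remaining:+$remaining,}$n"
done

echo "CLUSTER_NODES=$remaining"
# which would then be passed to:
#   ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=$remaining"
```
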

=================================================================================================================
Next, the grid side.

Removing the node from the RAC cluster
Remove the node from the cluster so that it is no longer a cluster member.

1. Check that the node's state is Unpinned

As the grid user run: olsnodes -s -t. If a node is not Unpinned, unpin it as root with: crsctl unpin css.

"olsnodes -s -t" can be run either on the node to be deleted or on a remaining node:

Node 1:
[root@racdb1 ~]# su - grid
[grid@racdb1 ~]$ olsnodes -s -t
racdb1 Active Unpinned
racdb2 Active Unpinned
racdb3 Active Unpinned

Node 3:
[root@racdb3 ~]# su - grid
[grid@racdb3 ~]$ olsnodes -s -t
racdb1 Active Unpinned
racdb2 Active Unpinned
racdb3 Active Unpinned

-----------------------------
[grid@racdb3 ~]$ olsnodes -s -t|grep racdb1

racdb1 Active Unpinned

The state is already Unpinned, but since this is a test environment we can try the unpin command anyway:

[root@racdb3 ~]# crsctl unpin css -n racdb3

CRS-4667: Node racdb3 successfully unpinned.

The unpin command can be run on either the node to be deleted or a remaining node.
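The Unpinned check lends itself to a small script. Here the olsnodes output is hard-coded as sample data so the sketch runs anywhere; on a real cluster node, replace the variable with the output of the actual olsnodes -s -t command:

```shell
# Sample "olsnodes -s -t" output (replace with the real command on a cluster node)
olsnodes_out='racdb1 Active Unpinned
racdb2 Active Unpinned
racdb3 Active Unpinned'

# Any node whose third column is not "Unpinned" still needs "crsctl unpin css -n <node>" as root
pinned=$(printf '%s\n' "$olsnodes_out" | awk '$3 != "Unpinned" {print $1}')
if [ -z "$pinned" ]; then
  echo "all nodes unpinned - safe to proceed"
else
  echo "still pinned: $pinned"
fi
```
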


2. Deconfigure the node

On the node to be deleted, run as root:

cd $GRID_HOME/crs/install

./rootcrs.pl -deconfig -force
The session is recorded below:

[root@racdb3 ~]# cd /u01/app/11.2.0/grid
[root@racdb3 grid]# cd crs/install/
[root@racdb3 install]# ls
cmdllroot.sh crsdelete.pm installRemove.excl paramfile.crs rootofs.sh
crsconfig_addparams crspatch.pm onsconfig ParentDirPerm_racdb1.txt s_crsconfig_defs
crsconfig_addparams.sbs hasdconfig.pl oraacfs.pm ParentDirPerm_racdb3.txt s_crsconfig_lib.pm
crsconfig_lib.pm inittab oracle-ohasd.conf preupdate.sh s_crsconfig_racdb1_env.txt
crsconfig_params install.excl oracle-ohasd.service rootcrs.pl s_crsconfig_racdb3_env.txt
crsconfig_params.sbs install.incl oracss.pm roothas.pl tfa_setup.sh

[root@racdb3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.16.0/255.255.255.0/eth0, type static
VIP exists: /192.168.16.55/192.168.16.55/192.168.16.0/255.255.255.0/eth0, hosting node racdb1
VIP exists: /racdb2-vip/192.168.16.56/192.168.16.0/255.255.255.0/eth0, hosting node racdb2
VIP exists: /racdb3-vip/192.168.16.57/192.168.16.0/255.255.255.0/eth0, hosting node racdb3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racdb3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racdb3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racdb3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'racdb3'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'racdb3'
CRS-2677: Stop of 'ora.DATA.dg' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racdb3'
CRS-2677: Stop of 'ora.asm' on 'racdb3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racdb3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'racdb3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racdb3'
CRS-2673: Attempting to stop 'ora.evmd' on 'racdb3'
CRS-2673: Attempting to stop 'ora.asm' on 'racdb3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racdb3'
CRS-2677: Stop of 'ora.ctssd' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racdb3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racdb3'
CRS-2677: Stop of 'ora.cssd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racdb3'
CRS-2677: Stop of 'ora.gipcd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racdb3'
CRS-2677: Stop of 'ora.gpnpd' on 'racdb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racdb3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Note: if the node being deleted is the last node in the cluster, i.e. you are removing the entire cluster, run instead:


./rootcrs.pl -deconfig -force -lastnode

3. Delete the node from the cluster (performed on another node)

On a node that is not being deleted, run as root:

crsctl delete node -n NAME_OF_NODE_TO_BE_DELETED

We run it on node 1:

[root@racdb1 bin]# ./crsctl delete node -n racdb3
CRS-4661: Node racdb3 successfully deleted.


4. Update the Nodelist on the node being deleted

On the node being deleted, i.e. node 3, run the following as the grid user:

./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local

Substituting our actual values:

[root@racdb3 bin]# su - grid
[grid@racdb3 ~]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@racdb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3066 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Next, GRID_HOME.

Removing the GRID software
1. Delete GRID_HOME

If GRID_HOME is shared, run the following on the node being deleted as the grid user:

cd $GRID_HOME/oui/

./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local

If it is not shared, run as the grid user:

$ Grid_home/deinstall/deinstall -local

Note: be sure to include the -local option; without it, this command deletes the GRID_HOME directory on every node.

Ours is not shared, so we use the latter method. Here we run it on node 1. During execution you can simply press Enter at every prompt except where annotated below. The session is recorded as follows:


[grid@racdb1 bin]$ $ORACLE_HOME/deinstall/deinstall -local

....................(output omitted; only the important parts and those requiring manual input are recorded).......................

Enter an address or the name of the virtual IP used on node "racdb1"[racdb1-vip]

> <-- just press Enter

Enter the IP netmask of Virtual IP "xxx.xxx.3.53" on node "racdb1"[255.255.255.0]

> <-- just press Enter

Enter the network interface name on which the virtual IP address "xxx.xxx.3.53" is active

> <-- just press Enter

Enter an address or the name of the virtual IP[]

> <-- just press Enter

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]: LISTENER

At least one listener from the discovered listener list available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

……

Do you want to continue (y - yes, n - no)? [n]: y

……

Run the following command as the root user or the administrator on node "racdb1".

……

/tmp/deinstall2018-04-26_00-15-04PM/perl/bin/perl -I/tmp/deinstall2018-04-26_00-15-04PM/perl/lib -I/tmp/deinstall2018-04-26_00-15-04PM/crs/install /tmp/deinstall2018-04-26_00-15-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp" "

Press Enter after you finish running the above commands

……

As prompted, open another session on node 1 as root and run the command above:


[root@racdb1 ~]# /tmp/deinstall2018-04-26_00-15-04PM/perl/bin/perl -I/tmp/deinstall2018-04-26_00-15-04PM/perl/lib -I/tmp/deinstall2018-04-26_00-15-04PM/crs/install /tmp/deinstall2018-04-26_00-15-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp

****Unable to retrieve Oracle Clusterware home.

Start Oracle Clusterware stack and try again.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

################################################################

# You must kill processes or reboot the system to properly #

# cleanup the processes started by Oracle clusterware #

################################################################

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node


Then return to the original session window and press Enter to continue:


Run the following command as the root user or the administrator on node "racdb1".

/tmp/deinstall2018-04-26_00-15-04PM/perl/bin/perl -I/tmp/deinstall2018-04-26_00-15-04PM/perl/lib -I/tmp/deinstall2018-04-26_00-15-04PM/crs/install /tmp/deinstall2018-04-26_00-15-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp" "

Press Enter after you finish running the above commands -- press Enter here

<----------------------------------------

##### ORACLE DEINSTALL & DECONFIG TOOL END ######

GRID_HOME deletion is complete.

=====================================================================================================================================
A detailed record of the steps above follows:
[root@racdb3 11.2.0]# find . -name runInstaller
./grid/oui/bin/runInstaller
[root@racdb3 11.2.0]# cd ./grid/oui/bin/runInstaller
-bash: cd: ./grid/oui/bin/runInstaller: Not a directory
[root@racdb3 11.2.0]# cd ./grid/oui/bin/
[root@racdb3 bin]# ls
addLangs.sh attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh
addNode.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh
[root@racdb3 bin]# pwd
/u01/app/11.2.0/grid/oui/bin
[root@racdb3 bin]# ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local

The user is root. Oracle Universal Installer cannot continue installation if the user is root.
: No such file or directory
[root@racdb3 bin]# su - grid
[grid@racdb3 ~]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local
-bash: ./runInstaller: No such file or directory
[grid@racdb3 ~]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@racdb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3066 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@racdb3 bin]$ cd
[grid@racdb3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2018-12-30_10-00-57AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: racdb3
Checking for sufficient temp space availability on node(s) : 'racdb3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2018-12-30_10-00-57AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "racdb3"[racdb3-vip]
>

The following information can be collected by running "/sbin/ifconfig -a" on node "racdb3"
Enter the IP netmask of Virtual IP "192.168.16.57" on node "racdb3"[255.255.255.0]
>

Enter the network interface name on which the virtual IP address "192.168.16.57" is active
>

Enter an address or the name of the virtual IP[]
>


Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/netdc_check2018-12-30_10-01-44-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/asmcadc_check2018-12-30_10-01-55-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:racdb3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'racdb3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2018-12-30_10-00-57AM/logs/deinstall_deconfig2018-12-30_10-01-11-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-12-30_10-00-57AM/logs/deinstall_deconfig2018-12-30_10-01-11-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/asmcadc_clean2018-12-30_10-01-59-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/netdc_clean2018-12-30_10-01-59-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
Stopping listener on node "racdb3": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "racdb3".

/tmp/deinstall2018-12-30_10-00-57AM/perl/bin/perl -I/tmp/deinstall2018-12-30_10-00-57AM/perl/lib -I/tmp/deinstall2018-12-30_10-00-57AM/crs/install /tmp/deinstall2018-12-30_10-00-57AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-12-30_10-00-57AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Here we open another window and run the following command as root:
/tmp/deinstall2018-12-30_10-00-57AM/perl/bin/perl -I/tmp/deinstall2018-12-30_10-00-57AM/perl/lib -I/tmp/deinstall2018-12-30_10-00-57AM/crs/install /tmp/deinstall2018-12-30_10-00-57AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-12-30_10-00-57AM/response/deinstall_Ora11g_gridinfrahome1.rsp"


Remove the directory: /tmp/deinstall2018-12-30_10-00-57AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Failed to delete the directory '/u01/app/grid'. The directory is in use.
Delete directory '/u01/app/grid' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-30_10-00-57AM' on node 'racdb3'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "racdb3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Failed to delete directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup completed with errors.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'racdb3' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'racdb3' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'racdb3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[grid@racdb3 ~]$

=========================================================================================================================================


2. Update the NodeList on the remaining nodes

On any remaining node, i.e. racdb1 or racdb2, run:

cd $GRID_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent

Substituting our actual values:


cd $ORACLE_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb1,racdb2" CRS=TRUE -silent

We run this on node 1; the session is recorded below:

[root@racdb1 bin]# su - grid
[grid@racdb1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@racdb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb1,racdb2" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2076 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.


3. Run the post-deletion check

On a remaining node, run as the grid user:

cluvfy stage -post nodedel -n deleted_node_list [-verbose]

Substituting our actual values:

cluvfy stage -post nodedel -n racdb3 -verbose

We run it on node 1; the session is recorded below:


[grid@racdb1 bin]$ cluvfy stage -post nodedel -n racdb3 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

……

CRS integrity check passed

Result:

Node removal check passed

Post-check for node removal was successful.

This completes the deletion of node racdb3.

Next we delete racdb2. Here we assume that node 2 cannot start because of a hardware failure, to simulate forcibly removing it from the cluster.

=============================================================================================================================================

Forced RAC Node Deletion Procedure
Removing the oracle instance
1. If the database has services configured, handle them first:

If services are configured for the database, deal with them first, so that they can only run on the nodes remaining in the cluster.

Here we assume the database has a service named racdbsvc; in a real environment you can list services with "crsctl stat res -t|grep svc".

On any remaining node in the cluster, run as the oracle user:

srvctl modify service -d racdb -s racdbsvc -n -i racdb3,racdb4 -f
If the command succeeds it produces no output; afterwards, verify the service:

srvctl status service -d racdb -s racdbsvc

Service racdbsvc is running on instance(s) racdb3,racdb4

2. Delete the instance from the racdb database

For an "Administrator Managed database" this is again done with the dbca tool, either via the GUI or on the command line. The steps are the same as in the normal node-deletion procedure above, so the session record is omitted here.

Handling the DB software
On a remaining node in the cluster, run as the oracle user:

cd $ORACLE_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=${ORACLE_HOME} "CLUSTER_NODES={remainnode1,remainnode2,....}"

Substituting our actual values:

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 "CLUSTER_NODES={racdb3,racdb4}"

The output is recorded below:

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3007 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.
Removing the node from the RAC cluster
1. Stop the node's VIP:

On any remaining node in the cluster, run as root:

/u01/app/11.2.0.4/grid/bin/srvctl stop vip -i racdb2
2. Remove the node's VIP:

On any remaining node in the cluster, run as root:

/u01/app/11.2.0.4/grid/bin/srvctl remove vip -i racdb2 -f
3. Check that the node's state is Unpinned

As the grid user run: olsnodes -s -t; if it is not Unpinned, unpin it as root with: crsctl unpin css.

4. Delete the RAC node:

On any remaining node in the cluster, run as root:

[root@racdb3 ~]# /u01/app/11.2.0.4/grid/bin/crsctl delete node -n racdb2

CRS-4661: Node racdb2 successfully deleted.
Handling the GI Inventory
On any remaining node in the cluster, run as the grid user:

cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
Substituting our actual values:

cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0.4/grid "CLUSTER_NODES={racdb3,racdb4}" CRS=TRUE -silent
The output is recorded below:

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3007 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.
Finally, query the cluster members and their states:

olsnodes -s

racdb3 Active

racdb4 Active
This completes the forced node deletion.
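As a final sanity check it is worth asserting that the deleted node really is gone from the member list. Sample olsnodes output is hard-coded here so the sketch runs anywhere; on a real node, substitute the actual olsnodes -s command:

```shell
# Sample "olsnodes -s" output after the deletion (replace with the real command)
members='racdb3 Active
racdb4 Active'

# The forcibly removed node (racdb2) must no longer appear in the member list
if printf '%s\n' "$members" | grep -qw racdb2; then
  echo "racdb2 is still registered in the cluster"
else
  echo "racdb2 has been removed from the cluster"
fi
```
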

Summary
As the operations above show, adding and deleting RAC nodes does not demand deep technical skill. Still, it is advisable to follow the official ORACLE documentation and METALINK (MOS) notes strictly, to avoid unnecessary trouble.

 

References

1. How to Add Node/Instance or Remove Node/Instance in 10gr2, 11gr1, 11gr2 and 12c Oracle Clusterware and RAC (Doc ID 1332451.1)

2. How to Remove/Delete a Node From Grid Infrastructure Clusterware When the Node Has Failed (Doc ID 1262925.1)

3、https://docs.oracle.com/cd/E14795_01/doc/rac.112/e10717/adddelclusterware.htm#CHDFIAIE

4、https://docs.oracle.com/cd/E18283_01/rac.112/e16795/adddelunix.htm#BEICADHD

5、https://docs.oracle.com/cd/E18283_01/rac.112/e16794/adddelclusterware.htm#CWADD90992

6、http://docs.oracle.com/cd/E11882_01/rac.112/e41959/adddelclusterware.htm#CWADD90992