
Oracle 11g R2 RAC installation guide, with detailed steps (personally verified on both VMware and ESXi; a physical machine is naturally no problem)

oracle 11g r2 rac

I did this install a while ago but didn't have time to write it up; now that I have a free moment, here are the organized steps, along with a few lessons learned:

1. Install the operating system (a minimal install is fine)

2. Disable the firewall

3. Replace the yum repository

4. Add the shared disks

5. Create users and groups

6. Add user environment variables

7. Tune kernel parameters

8. Install dependency packages

9. Configure /etc/hosts

10. Configure passwordless SSH

11. Adjust swap

12. Configure the ASM shared disks

13. Grid pre-install check

14. Grid installation

15. ASM disk group creation

16. Database installation

17. Database instance creation

18. RAC status checks and maintenance


Creating shared disks on an ESXi host:

http://www.linuxfly.org/post/673/


Installing 11g RAC on VMware Workstation 12:

Create the disks in the VMware installation directory:

vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\VMwarecp\VMWARE\racsharedisk\ocr.vmdk

vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\VMwarecp\VMWARE\racsharedisk\ocr2.vmdk

vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 E:\VMwarecp\VMWARE\racsharedisk\votingdisk.vmdk

vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 E:\VMwarecp\VMWARE\racsharedisk\data2.vmdk

vmware-vdiskmanager.exe -c -s 5000Mb -a lsilogic -t 2 E:\VMwarecp\VMWARE\racsharedisk\backup.vmdk


Edit the .vmx files of rac1 and rac2 and add the following (adjust the filename paths to wherever you actually created the .vmdk files):

scsi1.present = "TRUE"

scsi1.virtualDev = "lsilogic"

scsi1.sharedBus = "virtual"


scsi2.present = "TRUE"

scsi2.virtualDev = "lsilogic"

scsi2.sharedBus = "virtual"


scsi1:1.present = "TRUE"

scsi1:1.mode = "independent-persistent"

scsi1:1.filename = "D:\VMWARE\racsharedisk\ocr.vmdk"

scsi1:1.deviceType = "plainDisk"


scsi1:2.present = "TRUE"

scsi1:2.mode = "independent-persistent"

scsi1:2.filename = "D:\VMWARE\racsharedisk\votingdisk.vmdk"

scsi1:2.deviceType = "plainDisk"


scsi1:3.present = "TRUE"

scsi1:3.mode = "independent-persistent"

scsi1:3.filename = "D:\VMWARE\racsharedisk\data.vmdk"

scsi1:3.deviceType = "plainDisk"


scsi1:4.present = "TRUE"

scsi1:4.mode = "independent-persistent"

scsi1:4.filename = "D:\VMWARE\racsharedisk\backup.vmdk"

scsi1:4.deviceType = "plainDisk"


scsi1:5.present = "TRUE"

scsi1:5.mode = "independent-persistent"

scsi1:5.filename = "D:\VMWARE\racsharedisk\ocr2.vmdk"

scsi1:5.deviceType = "plainDisk"


scsi2:2.present = "TRUE"

scsi2:2.mode = "independent-persistent"

scsi2:2.filename = "D:\VMWARE\racsharedisk\data3.vmdk"

scsi2:2.deviceType = "plainDisk"


disk.locking = "false"

diskLib.dataCacheMaxSize = "0"

diskLib.dataCacheMaxReadAheadSize = "0"

diskLib.DataCacheMinReadAheadSize = "0"

diskLib.dataCachePageSize = "4096"

diskLib.maxUnsyncedWrites = "0"
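The repeated `scsi1:N` stanzas above all follow one pattern, so they can be generated instead of typed by hand. A minimal sketch (the `vmx_stanza` helper is ours, not a VMware tool; the disk list and `D:\VMWARE\racsharedisk` path are the ones used in the .vmx block above — point them at your own .vmdk location):

```shell
#!/bin/sh
# Emit one shared-disk stanza for the .vmx file.
# $1 = SCSI slot (e.g. "scsi1:1"), $2 = .vmdk path
vmx_stanza() {
  printf '%s.present = "TRUE"\n' "$1"
  printf '%s.mode = "independent-persistent"\n' "$1"
  printf '%s.filename = "%s"\n' "$1" "$2"
  printf '%s.deviceType = "plainDisk"\n\n' "$1"
}

# Disk layout used in this guide (scsi1:1 .. scsi1:5)
i=1
for disk in ocr.vmdk votingdisk.vmdk data.vmdk backup.vmdk ocr2.vmdk; do
  vmx_stanza "scsi1:$i" "D:\\VMWARE\\racsharedisk\\$disk"
  i=$((i + 1))
done
```

Redirect the output into a file and paste it into both nodes' .vmx files.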


Do all of the following on both rac1 and rac2:

1. Create users and groups:

/usr/sbin/groupadd -g 1000 oinstall

/usr/sbin/groupadd -g 1020 asmadmin

/usr/sbin/groupadd -g 1021 asmdba

/usr/sbin/groupadd -g 1022 asmoper

/usr/sbin/groupadd -g 1031 dba

/usr/sbin/groupadd -g 1032 oper

useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle

mkdir -p /u01/app/11.2.0/grid

mkdir -p /u01/app/grid

mkdir /u01/app/oracle

chown -R grid:oinstall /u01

chown oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/


2. Kernel parameter settings:

[root@rac1 ~]# vi /etc/sysctl.conf

kernel.msgmnb = 65536

kernel.msgmax = 65536

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 4294967296

kernel.shmmax = 68719476736

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

net.ipv4.tcp_wmem = 262144 262144 262144

net.ipv4.tcp_rmem = 4194304 4194304 4194304
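Note that kernel.shmmax is in bytes while kernel.shmall is in pages (4096 bytes on x86_64), so shmall must be at least shmmax / page_size for a single shared-memory segment of shmmax bytes to be allocatable. A quick sanity check of the values above (a sketch, using the 64 GB shmmax from this sysctl.conf):

```shell
#!/bin/sh
# shmmax is in bytes, shmall in 4 KB pages on x86_64.
shmmax=68719476736          # 64 GB, from the sysctl.conf above
page_size=4096
min_shmall=$((shmmax / page_size))
echo "kernel.shmall must be >= $min_shmall pages"
```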

3. Configure shell limits for the oracle and grid users

[root@rac1 ~]# vi /etc/security/limits.conf

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536


4. Configure login

[root@rac1 ~]# vi /etc/pam.d/login

session required pam_limits.so



5. Dependency packages:

The following packages are required:

binutils-2.20.51.0.2-5.11.el6 (x86_64)

compat-libcap1-1.10-1 (x86_64)

compat-libstdc++-33-3.2.3-69.el6 (x86_64)

compat-libstdc++-33-3.2.3-69.el6.i686

gcc-4.4.4-13.el6 (x86_64)

gcc-c++-4.4.4-13.el6 (x86_64)

glibc-2.12-1.7.el6 (i686)

glibc-2.12-1.7.el6 (x86_64)

glibc-devel-2.12-1.7.el6 (x86_64)

glibc-devel-2.12-1.7.el6.i686

ksh

libgcc-4.4.4-13.el6 (i686)

libgcc-4.4.4-13.el6 (x86_64)

libstdc++-4.4.4-13.el6 (x86_64)

libstdc++-4.4.4-13.el6.i686

libstdc++-devel-4.4.4-13.el6 (x86_64)

libstdc++-devel-4.4.4-13.el6.i686

libaio-0.3.107-10.el6 (x86_64)

libaio-0.3.107-10.el6.i686

libaio-devel-0.3.107-10.el6 (x86_64)

libaio-devel-0.3.107-10.el6.i686

make-3.81-19.el6

sysstat-9.0.4-11.el6 (x86_64)

Install them with yum:

yum install gcc gcc-c++ libaio* glibc* glibc-devel* ksh libgcc* libstdc++* libstdc++-devel* make sysstat unixODBC* compat-libstdc++-33.x86_64 elfutils-libelf-devel glibc.i686 compat-libcap1 smartmontools unzip openssh* parted cvuqdisk -y

## cvuqdisk is installed by fixup.sh during the grid installation, so it's fine if yum can't find it




6. Additional 32-bit rpm packages:

## Note: do the rpm installs only after the yum install

rpm -ivh --force --nodeps libaio-0.3.106-5.i386.rpm

rpm -ivh --force --nodeps libaio-devel-0.3.106-5.i386.rpm

rpm -ivh --force --nodeps pdksh-5.2.14-36.el5.i386.rpm

rpm -ivh --force --nodeps libstdc++-4.1.2-48.el5.i386.rpm

rpm -ivh --force --nodeps libgcc-4.1.2-48.el5.i386.rpm

rpm -ivh --force --nodeps unixODBC-devel-2.2.11-7.1.i386.rpm

rpm -ivh --force --nodeps compat-libstdc++-33-3.2.3-61.i386.rpm

rpm -ivh --force --nodeps unixODBC-2.2.11-7.1.i386.rpm


7. Configure the hosts file:

192.168.20.220 rac1

192.168.166.220 rac1-priv

192.168.20.223 rac1-vip

192.168.20.221 rac2

192.168.166.221 rac2-priv

192.168.20.224 rac2-vip

192.168.20.222 scan-ip
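A quick format check of these entries catches typos before the grid pre-check does. A sketch (the entries are embedded as a here-string so it runs anywhere; in practice point the awk at the real /etc/hosts, and adjust the subnets to your own network plan — public/VIP/SCAN share 192.168.20.x here, the private pair uses 192.168.166.x):

```shell
#!/bin/sh
# Each RAC hosts entry must be exactly "<IPv4> <hostname>".
hosts_entries='192.168.20.220 rac1
192.168.166.220 rac1-priv
192.168.20.223 rac1-vip
192.168.20.221 rac2
192.168.166.221 rac2-priv
192.168.20.224 rac2-vip
192.168.20.222 scan-ip'

# Count well-formed two-field lines; a two-node setup needs 7
# (public, private and VIP per node, plus the SCAN address).
echo "$hosts_entries" | awk 'NF == 2 { n++ } END { print n " entries" }'
```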


8. Add environment variables

ORACLE_SID must be adjusted per node

## grid user variables

[root@rac1 ~]# su - grid

[grid@rac1 ~]$ vi .bash_profile


##rac1:

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

umask 022


##rac2:

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=+ASM2

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

umask 022


## Note: ORACLE_UNQNAME is the database name; when the database is created across multiple nodes, one instance is created per node, and ORACLE_SID is the instance name

## oracle user variables

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ vi .bash_profile

#rac1:

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=orcl1

export ORACLE_UNQNAME=orcl

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export TNS_ADMIN=$ORACLE_HOME/network/admin

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib


##rac2:

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=orcl2

export ORACLE_UNQNAME=orcl

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export TNS_ADMIN=$ORACLE_HOME/network/admin

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib


$ source .bash_profile    # make the settings take effect
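Since only ORACLE_SID differs between the two nodes, the same .bash_profile can be copied to both nodes unchanged if the SID is derived from the hostname. A sketch (the helper function names are ours):

```shell
#!/bin/sh
# Map a node name to its ASM instance SID.
asm_sid_for() {
  case "$1" in
    rac1) echo "+ASM1" ;;
    rac2) echo "+ASM2" ;;
    *)    echo "unknown node: $1" >&2; return 1 ;;
  esac
}

# Map a node name to its database instance SID.
db_sid_for() {
  case "$1" in
    rac1) echo "orcl1" ;;
    rac2) echo "orcl2" ;;
    *)    echo "unknown node: $1" >&2; return 1 ;;
  esac
}

# In the grid .bash_profile:   export ORACLE_SID=$(asm_sid_for "$(hostname -s)")
# In the oracle .bash_profile: export ORACLE_SID=$(db_sid_for "$(hostname -s)")
```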


9. Set up passwordless SSH between rac1 and rac2 for each of the root, grid and oracle users

Configure this for every user, running the append and scp commands from within ~/.ssh:

ssh-keygen -t rsa

ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys

ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys

scp authorized_keys rac2:~/.ssh/


ssh rac1 date

ssh rac2 date

ssh rac1-priv date

ssh rac2-priv date
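The sequence above has to be repeated for root, grid and oracle. A dry-run helper that just prints the full command list for a node pair can make that less error-prone (a sketch; `print_ssh_setup` is ours and only echoes, it does not connect anywhere):

```shell
#!/bin/sh
# Print the key-exchange commands for a two-node cluster.
# Run the output by hand (as each of root, grid and oracle) on $1.
print_ssh_setup() {
  node_a=$1; node_b=$2
  echo "cd ~/.ssh"
  echo "ssh-keygen -t rsa"
  for n in "$node_a" "$node_b"; do
    echo "ssh $n cat ~/.ssh/id_rsa.pub >> authorized_keys"
  done
  echo "scp authorized_keys $node_b:~/.ssh/"
}

print_ssh_setup rac1 rac2
```

Afterwards verify with the four `ssh … date` commands above, once per user, so the first-connection host-key prompts are all answered.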



10. ## If swap is too small, how to extend it:


To extend swap with a swap file, first work out the number of blocks from the desired swap-file size in MB: blocks = size_in_MB * 1024 (since dd uses bs=1024 below). For example, a 64 MB swap file needs blocks = 64 * 1024 = 65536.


Then run:

dd if=/dev/zero of=/swapfile bs=1024 count=9216000

# mkswap /swapfile

# swapon /swapfile

# vi /etc/fstab

add the line: /swapfile swap swap defaults 0 0

# cat /proc/swaps   or   # free -m    # check the swap size

# swapoff /swapfile    # to disable the extra swap again
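The block-count formula above can be checked directly; the count=9216000 used in the dd command corresponds to a 9000 MB swap file:

```shell
#!/bin/sh
# blocks = desired swap size in MB * 1024, since dd writes bs=1024 (1 KB) blocks.
swap_mb=9000
blocks=$((swap_mb * 1024))
echo "dd if=/dev/zero of=/swapfile bs=1024 count=$blocks"
```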




11. Shared disk configuration:


Configure the shared disks on rac1 and rac2 (on an ESXi host, both SCSI buses must be set to shared, otherwise you'll get errors):

rac2 needs a reboot before the disks show up under ll /dev/raw,

and root.sh will fail if it cannot find the raw disks; so reboot rac2 right after configuring the shared disks.

After partitioning the disks on rac1:

cat /etc/udev/rules.d/60-raw.rules

# Enter raw device bindings here.

#

# An example would be:

# ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"

# to bind /dev/raw/raw1 to /dev/sda, or

# ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"

# to bind /dev/raw/raw2 to the device with major 8, minor 1.

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"

ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="33", RUN+="/bin/raw /dev/raw/raw2 %M %m"

ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="49", RUN+="/bin/raw /dev/raw/raw3 %M %m"

ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="65", RUN+="/bin/raw /dev/raw/raw4 %M %m"

ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="81", RUN+="/bin/raw /dev/raw/raw5 %M %m"


KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"
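Partition 1 of the Nth data disk (sdb being the first) gets minor number 16*N + 1, which is exactly why consecutive ENV{MINOR} values differ by 16. The whole rules file can therefore be generated instead of hand-edited. A sketch (`gen_raw_rules` is our helper name):

```shell
#!/bin/sh
# Generate the raw-device binding rules for /dev/sdb1 .. /dev/sdf1.
gen_raw_rules() {
  i=1
  for d in b c d e f; do
    minor=$((16 * i + 1))   # sdb1 -> 17, sdc1 -> 33, ... step of 16 per disk
    printf 'ACTION=="add", KERNEL=="sd%s1", RUN+="/bin/raw /dev/raw/raw%d %%N"\n' "$d" "$i"
    printf 'ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="%d", RUN+="/bin/raw /dev/raw/raw%d %%M %%m"\n' "$minor" "$i"
    i=$((i + 1))
  done
  printf 'KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"\n'
}

gen_raw_rules   # redirect into /etc/udev/rules.d/60-raw.rules on both nodes
```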


Note: the ENV{MINOR} values differ by 16 between consecutive disks, so any disk you add must also step the value by 16, otherwise it won't be recognized.

Then run start_udev

ll /dev/raw

If the devices don't show up on rac2, run partprobe


12. Grid pre-install check


Run the cluster pre-check as the grid user:

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

If the output is garbled, set the character encoding:

export LC_CTYPE=en_US.UTF-8


13. Install grid

Run ./runInstaller and follow the prompts

## Note 1: while the root.sh script runs you may hit: CRS-4124: Oracle High Availability Services startup failed.

CRS-4000: Command Start failed, or completed with errors.

ohasd failed to start: Inappropriate ioctl for device

ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

The workaround, absurdly enough, is to keep running the command below in another window until it succeeds.

## run it while root.sh is at the "Adding daemon to inittab" stage, otherwise you may end up reinstalling the OS to install RAC

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1

Note 2: at the very end the installer reports a DNS resolution error, [INS-20802]; it can be ignored.


14. Create the ASM disk groups

su - grid

Set the character encoding:

export LC_CTYPE=en_US.UTF-8

Run asmca

and follow the prompts to create the disk groups


15. Install the database

Switch to the oracle user and follow the installer prompts


16. RAC maintenance

1. Check service status


(the gsd resource showing offline can be ignored)


[root@rac1 ~]# su - grid

[grid@rac1 ~]$ crs_stat -t


Check the cluster instance status

[grid@rac1 ~]$ srvctl status database -d orcl

Instance orcl1 is running on node rac1

Instance orcl2 is running on node rac2



Check the local node's CRS status


[grid@rac1 ~]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

Check the cluster-wide CRS status


[grid@rac1 ~]$ crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

2. View the cluster node configuration


[grid@rac1 ~]$ olsnodes

rac1

rac2


[grid@rac1 ~]$ olsnodes -n

rac1 1

rac2 2


[grid@rac1 ~]$ olsnodes -n -i -s -t

rac1 1 rac1-vip Active Unpinned

rac2 2 rac2-vip Active Unpinned

3. View the clusterware voting disks


[grid@rac1 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 496abcfc4e214fc9bf85cf755e0cc8e2 (/dev/raw/raw1) [OCR]

Located 1 voting disk(s).

4. View the cluster SCAN VIP


[grid@rac1 ~]$ srvctl config scan

SCAN name: scan-ip, Network: 1/192.168.248.0/255.255.255.0/eth0

SCAN VIP name: scan1, IP: /scan-ip/192.168.248.110

View the cluster SCAN listener


[grid@rac1 ~]$ srvctl config scan_listener

SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521


5. Adding a disk to ASM:

Option 1, as the grid user:

sqlplus / as sysasm


alter diskgroup FRA add disk '/dev/raw/raw4' rebalance power 5;

Option 2: run asmca and add the disk there


6. Stopping and starting the cluster:


A. Stopping RAC


1. Check the database instances

[oracle@rac1 ~]$ srvctl status database -d RACDB

Instance RACDB1 is running on node rac1

Instance RACDB2 is running on node rac2

[oracle@rac1 ~]$ ps -ef | grep smon

oracle 3676 1 0 06:05 ? 00:00:02 ora_smon_RACDB1

grid 12840 1 0 01:54 ? 00:00:00 asm_smon_+ASM1

grid 27890 27621 0 07:52 pts/3 00:00:00 grep smon

2. Stop the database and confirm it is down

[oracle@rac1 ~]$ srvctl stop database -d RACDB

[oracle@rac1 ~]$ srvctl status database -d RACDB

3. Switch to root and source the grid user's environment

[root@rac1 ~]# cd /home/grid

[root@rac1 grid]# source .bash_profile

4. Use crs_stat to confirm the state of all cluster resources and services

[root@rac1 bin]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

5. Stop the cluster with crsctl

[root@rac1 bin]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

6. Use crs_stat to confirm everything is down

[root@rac1 bin]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

CRS-0184: Cannot communicate with the CRS daemon.

This message means the cluster shut down cleanly.


B. Starting RAC

1. As root, source the grid user's environment (or skip this and run the tools from /u01/app/11.2.0/grid/bin/ directly)

2. Check the CRS cluster status

[root@rac1 bin]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

3. Start the cluster

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

Check the status

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

4. Confirm the database instance status with srvctl

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl status database -d RACDB

Instance RACDB1 is not running on node rac1

Instance RACDB2 is not running on node rac2

5. Start the RAC database instances

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl start database -d RACDB

Confirm the status

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl status database -d RACDB

Instance RACDB1 is running on node rac1

Instance RACDB2 is running on node rac2

6. Start OEM

[oracle@rac1 ~]$ emctl start dbconsole
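The stop and start sequences above always run in the same fixed order (database before clusterware on the way down, clusterware before database on the way up). A dry-run wrapper that prints the commands in order is an easy way to keep that straight (a sketch; `rac_commands` is our helper and only echoes — it does not touch the cluster; the GRID_HOME path and database name are the ones used in this guide):

```shell
#!/bin/sh
GRID_HOME=/u01/app/11.2.0/grid
DB_NAME=RACDB

# Print the ordered commands for "stop" or "start".
rac_commands() {
  case "$1" in
    stop)
      echo "$GRID_HOME/bin/srvctl stop database -d $DB_NAME"
      echo "$GRID_HOME/bin/crsctl stop cluster -all"
      ;;
    start)
      echo "$GRID_HOME/bin/crsctl start cluster -all"
      echo "$GRID_HOME/bin/srvctl start database -d $DB_NAME"
      ;;
    *)
      echo "usage: rac_commands stop|start" >&2; return 1
      ;;
  esac
}

rac_commands stop
rac_commands start
```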

References:

http://www.linuxidc.com/Linux/2017-03/141543.htm





17. Problems encountered and fixes


[root@rac1 ~]# sh /tmp/CVU_11.2.0.4.0_grid/runfixup.sh

Response file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.response

Enable file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.enable

Log file location: /tmp/CVU_11.2.0.4.0_grid/orarun.log

Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm

Preparing... ########################################### [100%]

ls: cannot access /usr/sbin/smartctl: No such file or directory

/usr/sbin/smartctl not found.

error: %pre(cvuqdisk-1.0.9-1.x86_64) scriptlet failed, exit status 1

error: install: %pre scriptlet failed (2), skipping cvuqdisk-1.0.9-1



yum install smartmontools


Creating trace directory

/u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory

Failed to create keys in the OLR, rc = 127, 32512

OLR configuration failed



Fix:

yum install compat-libcap1 -y


Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2017-09-01 18:18:52: Parsing the host name

2017-09-01 18:18:52: Checking for super user privileges

2017-09-01 18:18:52: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Improper Oracle Clusterware configuration found on this host

Deconfigure the existing cluster configuration before starting

to configure a new Clusterware

run '/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig'

to configure existing failed configuration and then rerun root.sh



Fix:

Delete the previous registration information and re-register:


/u01/app/11.2.0/grid/crs/install


./roothas.pl -delete -force -verbose


CRS-4124: Oracle High Availability Services startup failed.

CRS-4000: Command Start failed, or completed with errors.

ohasd failed to start: Inappropriate ioctl for device

ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.


The workaround, absurdly enough, is to keep running the command below in another window until it succeeds.

## run it while root.sh is at the "Adding daemon to inittab" stage, otherwise you may end up reinstalling the OS to install RAC

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1



Error:

ORA-12545: Connect failed because target host or object does not exist

Fix:

In testing, simply adding the resolution entry for the target database's VIP to the client machine's /etc/hosts fixed it (tested against 11g R2).


RAC maintenance


If you forget which ASM disks and device names an ASM disk group maps to, query them from the ASM instance:


SQL> select name,path from v$asm_disk_stat;


This post comes from the "nginx安裝優化" blog; please keep this attribution: http://mrdeng.blog.51cto.com/3736360/1970582
