Oracle 11g RAC notes (works on both VMware Workstation and ESXi hosts)

These are just personal notes, kept here so they don't get lost; I'll tidy them up when I have time.

In the VMware installation directory, create the shared disks:

vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 D:\VMWARE\racsharedisk\ocr.vmdk

vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 D:\VMWARE\racsharedisk\ocr2.vmdk

vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 D:\VMWARE\racsharedisk\votingdisk.vmdk

vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 D:\VMWARE\racsharedisk\data2.vmdk

vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 D:\VMWARE\racsharedisk\backup.vmdk


Edit the .vmx files of rac1 and rac2 and add:

scsi1.present = "TRUE"

scsi1.virtualDev = "lsilogic"

scsi1.sharedBus = "virtual"


scsi2.present = "TRUE"

scsi2.virtualDev = "lsilogic"

scsi2.sharedBus = "virtual"


scsi1:1.present = "TRUE"

scsi1:1.mode = "independent-persistent"

scsi1:1.filename = "D:\VMWARE\racsharedisk\ocr.vmdk"

scsi1:1.deviceType = "plainDisk"


scsi1:2.present = "TRUE"

scsi1:2.mode = "independent-persistent"

scsi1:2.filename = "D:\VMWARE\racsharedisk\votingdisk.vmdk"

scsi1:2.deviceType = "plainDisk"


scsi1:3.present = "TRUE"

scsi1:3.mode = "independent-persistent"

scsi1:3.filename = "D:\VMWARE\racsharedisk\data.vmdk"

scsi1:3.deviceType = "plainDisk"


scsi1:4.present = "TRUE"

scsi1:4.mode = "independent-persistent"

scsi1:4.filename = "D:\VMWARE\racsharedisk\backup.vmdk"

scsi1:4.deviceType = "plainDisk"


scsi1:5.present = "TRUE"

scsi1:5.mode = "independent-persistent"

scsi1:5.filename = "D:\VMWARE\racsharedisk\ocr2.vmdk"

scsi1:5.deviceType = "plainDisk"


scsi2:2.present = "TRUE"

scsi2:2.mode = "independent-persistent"

scsi2:2.filename = "D:\VMWARE\racsharedisk\data3.vmdk"

scsi2:2.deviceType = "plainDisk"


disk.locking = "false"

diskLib.dataCacheMaxSize = "0"

diskLib.dataCacheMaxReadAheadSize = "0"

diskLib.DataCacheMinReadAheadSize = "0"

diskLib.dataCachePageSize = "4096"

diskLib.maxUnsyncedWrites = "0"





Create the users and groups:

/usr/sbin/groupadd -g 1000 oinstall

/usr/sbin/groupadd -g 1020 asmadmin

/usr/sbin/groupadd -g 1021 asmdba

/usr/sbin/groupadd -g 1022 asmoper

/usr/sbin/groupadd -g 1031 dba

/usr/sbin/groupadd -g 1032 oper

useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle

mkdir -p /u01/app/11.2.0/grid

mkdir -p /u01/app/grid

mkdir /u01/app/oracle

chown -R grid:oinstall /u01

chown oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/




Kernel parameter settings:

[root@rac1 ~]# vi /etc/sysctl.conf

kernel.msgmnb = 65536

kernel.msgmax = 65536

kernel.shmmax = 68719476736

kernel.shmall = 4294967296

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

net.ipv4.tcp_wmem = 262144 262144 262144

net.ipv4.tcp_rmem = 4194304 4194304 4194304
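As a cross-check on the shared-memory values above: kernel.shmall is counted in pages, so a consistent value can be derived from kernel.shmmax and the page size (4096 bytes on x86-64). A minimal sketch of the arithmetic:

```shell
# kernel.shmall is in pages; make it large enough to cover kernel.shmmax (in bytes)
shmmax=68719476736   # value used in /etc/sysctl.conf above
page_size=4096       # getconf PAGE_SIZE on x86-64
shmall=$((shmmax / page_size))
echo "kernel.shmall >= $shmall"   # prints: kernel.shmall >= 16777216
```

After editing /etc/sysctl.conf, run sysctl -p so the values take effect without a reboot.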



Configure shell limits for the oracle and grid users:

[root@rac1 ~]# vi /etc/security/limits.conf

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536



Configure /etc/pam.d/login:

[root@rac1 ~]# vi /etc/pam.d/login

session required pam_limits.so
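Once the limits and the pam_limits line are in place, a new login session for grid or oracle should report them. A quick check (a sketch; run in a fresh login shell as the user in question):

```shell
# Soft limits for the current login session
ulimit -n   # open files; should report 1024 here (hard limit 65536)
ulimit -u   # max user processes; should report 2047 here (hard limit 16384)
```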



Required software packages:

binutils-2.20.51.0.2-5.11.el6 (x86_64)

compat-libcap1-1.10-1 (x86_64)

compat-libstdc++-33-3.2.3-69.el6 (x86_64)

compat-libstdc++-33-3.2.3-69.el6.i686

gcc-4.4.4-13.el6 (x86_64)

gcc-c++-4.4.4-13.el6 (x86_64)

glibc-2.12-1.7.el6 (i686)

glibc-2.12-1.7.el6 (x86_64)

glibc-devel-2.12-1.7.el6 (x86_64)

glibc-devel-2.12-1.7.el6.i686

ksh

libgcc-4.4.4-13.el6 (i686)

libgcc-4.4.4-13.el6 (x86_64)

libstdc++-4.4.4-13.el6 (x86_64)

libstdc++-4.4.4-13.el6.i686

libstdc++-devel-4.4.4-13.el6 (x86_64)

libstdc++-devel-4.4.4-13.el6.i686

libaio-0.3.107-10.el6 (x86_64)

libaio-0.3.107-10.el6.i686

libaio-devel-0.3.107-10.el6 (x86_64)

libaio-devel-0.3.107-10.el6.i686

make-3.81-19.el6

sysstat-9.0.4-11.el6 (x86_64)

Install them:

yum install gcc gcc-c++ libaio* glibc* glibc-devel* ksh libgcc* libstdc++* libstdc++-devel* make sysstat unixODBC* compat-libstdc++-33.x86_64 elfutils-libelf-devel glibc.i686 compat-libcap1 smartmontools unzip openssh* parted cvuqdisk -y


rpm -ivh --force --nodeps libaio-0.3.106-5.i386.rpm

rpm -ivh --force --nodeps libaio-devel-0.3.106-5.i386.rpm

rpm -ivh --force --nodeps pdksh-5.2.14-36.el5.i386.rpm

rpm -ivh --force --nodeps libstdc++-4.1.2-48.el5.i386.rpm

rpm -ivh --force --nodeps libgcc-4.1.2-48.el5.i386.rpm

rpm -ivh --force --nodeps unixODBC-devel-2.2.11-7.1.i386.rpm

rpm -ivh --force --nodeps compat-libstdc++-33-3.2.3-61.i386.rpm

rpm -ivh --force --nodeps unixODBC-2.2.11-7.1.i386.rpm


Configure the /etc/hosts file:

192.168.20.220 rac1

192.168.166.220 rac1-priv

192.168.20.223 rac1-vip

192.168.20.221 rac2

192.168.166.221 rac2-priv

192.168.20.224 rac2-vip

192.168.20.222 scan-ip





ORACLE_SID must be set differently on each node.

[root@rac1 ~]# su - grid

[grid@rac1 ~]$ vi .bash_profile



##rac1:

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

umask 022



##rac2:


export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=+ASM2

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/11.2.0/grid

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

umask 022



Note: ORACLE_UNQNAME is the database name; when multiple nodes are specified at database creation, multiple instances are created, and ORACLE_SID is the name of the local database instance.



[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ vi .bash_profile



##rac1:

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=orcl1

export ORACLE_UNQNAME=orcl

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export TNS_ADMIN=$ORACLE_HOME/network/admin

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib



##rac2:

export TMP=/tmp

export TMPDIR=$TMP

export ORACLE_SID=orcl2

export ORACLE_UNQNAME=orcl

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export TNS_ADMIN=$ORACLE_HOME/network/admin

export PATH=/usr/sbin:$PATH

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib




$ source .bash_profile   # make the settings take effect





Configure passwordless SSH between rac1 and rac2 for the root, grid, and oracle users.

This must be set up under each user's home directory:


ssh-keygen -t rsa

cd ~/.ssh

ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys

ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys

chmod 600 authorized_keys

scp authorized_keys rac2:~/.ssh/


ssh rac1 date

ssh rac2 date

ssh rac1-priv date

ssh rac2-priv date



## If swap is too small, how to extend it:


To extend swap this way, first work out the block count from the desired swapfile size in MB: with a 1024-byte block size, blocks = size_in_MB * 1024. For example, for a 64 MB swapfile: blocks = 64 * 1024 = 65536.
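The block arithmetic can be scripted; a minimal sketch using the 64 MB example:

```shell
# With dd bs=1024, the count argument is size_in_MB * 1024
size_mb=64
count=$((size_mb * 1024))
echo "dd if=/dev/zero of=/swapfile bs=1024 count=$count"   # count=65536 for 64 MB
```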


Then:


# dd if=/dev/zero of=/swapfile bs=1024 count=9216000

# mkswap /swapfile

# swapon /swapfile

# vi /etc/fstab

Add: /swapfile swap swap defaults 0 0

# cat /proc/swaps   (or # free -m)   # check the swap size

# swapoff /swapfile   # disable the extra swap when it is no longer needed




Shared disk configuration:

Configure the shared disks on rac1 and rac2 (on an ESXi host, both SCSI buses must be set to shared, or errors will follow):

rac2 must be rebooted before the disks show up under ll /dev/raw.

So if root.sh fails because it cannot find the raw disks, reboot rac2; in other words, reboot rac2 right after configuring the shared disks.

After partitioning the disks on rac1:

cat /etc/udev/rules.d/60-raw.rules

# Enter raw device bindings here.

#

# An example would be:

# ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"

# to bind /dev/raw/raw1 to /dev/sda, or

# ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"

# to bind /dev/raw/raw2 to the device with major 8, minor 1.

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"

ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="33", RUN+="/bin/raw /dev/raw/raw2 %M %m"

ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="49", RUN+="/bin/raw /dev/raw/raw3 %M %m"

ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="65", RUN+="/bin/raw /dev/raw/raw4 %M %m"

ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"

ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="81", RUN+="/bin/raw /dev/raw/raw5 %M %m"


KERNEL=="raw[1-5]",OWNER="grid",GROUP="asmadmin",MODE="660"



Note that the ENV{MINOR} values differ by 16 (each whole disk occupies 16 minor numbers); any disk you add must also step the minor by 16, or it will not be recognized.

Then run start_udev and check:

ll /dev/raw
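Writing the ten rules by hand invites typos; here is a hedged generator sketch for the same layout (it assumes whole-disk minors step by 16 starting at sdb = minor 16, as above, and major 8 for all SCSI disks):

```shell
# Emit paired raw-binding rules for sdb1..sdf1; each disk's partition-1 minor steps by 16
i=1
minor=17   # sdb is (8,16), so sdb1 is (8,17)
for part in sdb1 sdc1 sdd1 sde1 sdf1; do
    echo "ACTION==\"add\", KERNEL==\"$part\", RUN+=\"/bin/raw /dev/raw/raw$i %N\""
    echo "ACTION==\"add\", ENV{MAJOR}==\"8\", ENV{MINOR}==\"$minor\", RUN+=\"/bin/raw /dev/raw/raw$i %M %m\""
    i=$((i + 1))
    minor=$((minor + 16))
done
```

Redirect the output into /etc/udev/rules.d/60-raw.rules, append the OWNER/GROUP line, then run start_udev.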



If the raw devices do not show up on rac2, run partprobe.


[root@rac1 ~]# sh /tmp/CVU_11.2.0.4.0_grid/runfixup.sh

Response file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.response

Enable file being used is :/tmp/CVU_11.2.0.4.0_grid/fixup.enable

Log file location: /tmp/CVU_11.2.0.4.0_grid/orarun.log

Installing Package /tmp/CVU_11.2.0.4.0_grid//cvuqdisk-1.0.9-1.rpm

Preparing... ########################################### [100%]

ls: cannot access /usr/sbin/smartctl: No such file or directory

/usr/sbin/smartctl not found.

error: %pre(cvuqdisk-1.0.9-1.x86_64) scriptlet failed, exit status 1

error: install: %pre scriptlet failed (2), skipping cvuqdisk-1.0.9-1



Fix:

yum install smartmontools



Cluster pre-installation check, as the grid user:

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

If the output is garbled:

export LC_CTYPE=en_US.UTF-8




uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba),1022(asmoper),1031(dba),1032(oper)

uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba),1022(asmoper),1031(dba),1032(oper)



Creating trace directory

/u01/app/11.2.0/grid/bin/clscfg.bin: error while loading shared libraries: libcap.so.1: cannot open shared object file: No such file or directory

Failed to create keys in the OLR, rc = 127, 32512

OLR configuration failed



Fix:

yum install compat-libcap1 -y



Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2017-09-01 18:18:52: Parsing the host name

2017-09-01 18:18:52: Checking for super user privileges

2017-09-01 18:18:52: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Improper Oracle Clusterware configuration found on this host

Deconfigure the existing cluster configuration before starting

to configure a new Clusterware

run '/u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig'

to configure existing failed configuration and then rerun root.sh




Fix:

Delete the previous registration and register again:

cd /u01/app/11.2.0/grid/crs/install

./roothas.pl -delete -force -verbose


CRS-4124: Oracle High Availability Services startup failed.

CRS-4000: Command Start failed, or completed with errors.

ohasd failed to start: Inappropriate ioctl for device

ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.




The fix, oddly enough, is to run the command below over and over in a second window while root.sh is waiting, until it succeeds.

## Run it when root.sh reaches the "Adding daemon to inittab" step; otherwise you may end up reinstalling the OS to get RAC installed.

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1





asmsnmp password: DENG19891220

oracle password: deng19891220





Error:

ORA-12545: Connect failed because target host or object does not exist

Fix:

In testing, adding the target database's VIP name resolution to the client host's /etc/hosts file was enough to resolve this (database version 11g R2).
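For the cluster in these notes, that means client-side entries like the following (a sketch; the VIP addresses come from the hosts file earlier):

```
192.168.20.223 rac1-vip
192.168.20.224 rac2-vip
```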




RAC maintenance

If you forget which ASM disks and device paths back a disk group, query the ASM instance:

SQL> select name, path from v$asm_disk_stat;


1. Check service status

(the known gsd offline status can be ignored)

[root@rac1 ~]# su - grid

[grid@rac1 ~]$ crs_stat -t


Check that the cluster database is running:

[grid@rac1 ~]$ srvctl status database -d orcl

Instance orcl1 is running on node rac1

Instance orcl2 is running on node rac2



Check the CRS status of the local node:

[grid@rac1 ~]$ crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

Check the cluster-wide CRS status:

[grid@rac1 ~]$ crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

3. View the cluster's node configuration:

[grid@rac1 ~]$ olsnodes

rac1

rac2


[grid@rac1 ~]$ olsnodes -n

rac1 1

rac2 2


[grid@rac1 ~]$ olsnodes -n -i -s -t

rac1 1 rac1-vip Active Unpinned

rac2 2 rac2-vip Active Unpinned

4. View the clusterware voting disk:

[grid@rac1 ~]$ crsctl query css votedisk

## STATE File Universal Id File Name Disk group

-- ----- ----------------- --------- ---------

1. ONLINE 496abcfc4e214fc9bf85cf755e0cc8e2 (/dev/raw/raw1) [OCR]

Located 1 voting disk(s).

5. View the cluster SCAN VIP:

[grid@rac1 ~]$ srvctl config scan

SCAN name: scan-ip, Network: 1/192.168.248.0/255.255.255.0/eth0

SCAN VIP name: scan1, IP: /scan-ip/192.168.248.110

View the cluster SCAN listener:

[grid@rac1 ~]$ srvctl config scan_listener

SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521

6. Start and stop the cluster database

To stop or start the database across the whole cluster, as the grid user:

[grid@rac1 ~]$ srvctl stop database -d orcl

[grid@rac1 ~]$ srvctl start database -d orcl


To stop the clusterware stack on a node, as root:

[root@rac1 bin]# pwd

/u01/app/11.2.0/grid/bin

[root@rac1 bin]# ./crsctl stop crs

Note that this stops only the local node, not the whole cluster.


EM management

Run as the oracle user:

[oracle@rac1 ~]$ emctl status dbconsole

[oracle@rac1 ~]$ emctl start dbconsole

[oracle@rac1 ~]$ emctl stop dbconsole

Connecting from a local sqlplus client

Install the Oracle client on Windows and edit tnsnames.ora:

D:\develop\app\orcl\product\11.2.0\client_1\network\admin\tnsnames.ora

Add:


RAC_ORCL =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.248.110)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

The HOST here is the SCAN IP.


C:\Users\sxtcx>sqlplus sys/<password>@RAC_ORCL as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Thu Apr 14 14:37:30 2016


Copyright (c) 1982, 2010, Oracle. All rights reserved.



Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options


SQL> select instance_name, status from v$instance;


INSTANCE_NAME STATUS

-------------------------------- ------------------------

orcl1 OPEN

When a second command-line window connects, it gets instance orcl2, which shows that connecting through the SCAN IP provides load balancing across the instances.






"connected to an idle instance" caused by oracle's environment variables

Fix: check oracle's environment variables.

Change ORACLE_SID and ORACLE_UNQNAME to the instance/database you actually created (on both rac1 and rac2),

then source the profile and the connection works.



Adding a disk to ASM:

As the grid user:

sqlplus / as sysasm

alter diskgroup FRA add disk '/dev/raw/raw4' rebalance power 5;
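The rebalance runs in the background; its progress can be watched from the same ASM instance with the standard v$asm_operation view (a sketch):

```sql
SQL> select operation, state, power, est_minutes from v$asm_operation;
-- no rows selected once the rebalance has completed
```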


Cluster shutdown and startup:

Oracle 11g RAC startup/shutdown and common maintenance commands (2013-10-27 16:47:33, category: Linux)

Starting and stopping are routine operations worth memorizing. The notes below assume:

Cluster name: rac-cluster

Cluster database: RACDB

I. Shutting down RAC

1. Confirm the current state with srvctl and ps -ef | grep smon

[grid@rac1 ~]$ srvctl status database -d RACDB

Instance RACDB1 is running on node rac1

Instance RACDB2 is running on node rac2

[grid@rac1 ~]$ ps -ef | grep smon

oracle 3676 1 0 06:05 ? 00:00:02 ora_smon_RACDB1

grid 12840 1 0 01:54 ? 00:00:00 asm_smon_+ASM1

grid 27890 27621 0 07:52 pts/3 00:00:00 grep smon

2. Stop the database and confirm again

[grid@rac1 ~]$ srvctl stop database -d RACDB

[grid@rac1 ~]$ ps -ef | grep smon

3. Switch to the root account to shut down ASM and the clusterware

[grid@rac1 ~]$ su -

Password:

[root@rac1 ~]# cd /home/grid

[root@rac1 grid]# source .bash_profile

4. Check the state of all cluster resources and services with crs_stat

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

5. Stop the cluster with crsctl

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all

6. Confirm with crs_stat that everything is down

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

CRS-0184: Cannot communicate with the CRS daemon.

That error means the shutdown completed cleanly.


II. Starting RAC

1. As root, source the grid user's environment variables (or skip this and call the binaries under /u01/app/11.2.0/grid/bin/ directly).

2. Start the cluster:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all

Check the status:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t -v

3. Confirm the database status with srvctl:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl status database -d RACDB

Instance RACDB1 is not running on node rac1

Instance RACDB2 is not running on node rac2

4. Start the database:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl start database -d RACDB

Confirm:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl status database -d RACDB

Instance RACDB1 is running on node rac1

Instance RACDB2 is running on node rac2

5. Start OEM (as the oracle user):

[oracle@rac1 ~]$ emctl start dbconsole

References:

http://www.linuxidc.com/Linux/2017-03/141543.htm


http://blog.csdn.net/seulk_q/article/details/42612393

http://www.cnblogs.com/abclife/p/5706807.html

http://blog.chinaunix.net/uid-22948773-id-3940857.html

http://gaoshan.blog.51cto.com/742525/317349/



















This post is from the "nginx安裝優化" blog; please contact the author before reprinting.
