
Building a Highly Available Web Cluster with corosync + pacemaker + crmsh

corosync pacemaker crmsh httpd

Network plan:

node1:eth0:172.16.31.10/16

node2: eth0: 172.16.31.11/16

nfs: eth0: 172.16.31.12/16

Notes:

The nfs host provides the NFS share and also acts as an NTP server, so node1 and node2 can synchronize their clocks against it.

Heartbeat traffic between node1 and node2 is carried over eth0.

The VIP of the web service is 172.16.31.166/16.


Architecture diagram: the same topology as in the previous article; only the HA software installed on the nodes differs.



I. Prerequisites for building the HA cluster

1. Mutual hostname resolution, so the nodes can address each other by name

[root@node1 ~]# vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.31.10 node1.stu31.com node1

172.16.31.11 node2.stu31.com node2


Copy it to node2:

[root@node1 ~]# scp /etc/hosts root@172.16.31.11:/etc/hosts


2. Passwordless SSH between the two nodes

Node 1:

[root@node1 ~]# ssh-keygen -t rsa -P ""

[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2

Node 2:

[root@node2 ~]# ssh-keygen -t rsa -P ""

[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1


Test:

[root@node2 ~]# date ; ssh node1 'date'

Fri Jan 2 05:46:54 CST 2015

Fri Jan 2 05:46:54 CST 2015

The times match. Note that the clocks on the two nodes must be in sync!

For how to build the NTP server, see: http://sohudrgon.blog.51cto.com/3088108/1598314
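
The client side of the time sync is not shown above; as a minimal sketch (assuming the NTP service on 172.16.31.12 is reachable and the ntpdate tool is installed), each node can be synced once by hand and then kept in sync with a cron job:

[root@node1 ~]# ntpdate 172.16.31.12

[root@node1 ~]# echo "*/10 * * * * /usr/sbin/ntpdate 172.16.31.12 &> /dev/null" >> /var/spool/cron/root

Run the same two commands on node2 so both cluster nodes track the same clock.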


II. Installing and configuring the cluster software

1. Install the corosync and pacemaker packages on both node1 and node2

# yum install corosync pacemaker -y


2. Create and edit the configuration file

[root@node1 ~]# cd /etc/corosync/

[root@node1 corosync]# cp corosync.conf.example corosync.conf

[root@node1 corosync]# cat corosync.conf

# Please read the corosync.conf.5 manual page

compatibility: whitetank

totem {

version: 2

# secauth: Enable mutual node authentication. If you choose to

# enable this ("on"), then do remember to create a shared

# secret with "corosync-keygen".

#enable authentication between nodes

secauth: on

threads: 0

# interface: define at least one interface to communicate

# over. If you define more than one interface stanza, you must

# also set rrp_mode.

interface {

# Rings must be consecutively numbered, starting at 0.

ringnumber: 0

# This is normally the *network* address of the

# interface to bind to. This ensures that you can use

# identical instances of this configuration file

# across all your cluster nodes, without having to

# modify this option.

#the network address to bind to

bindnetaddr: 172.16.31.0

# However, if you have multiple physical network

# interfaces configured for the same subnet, then the

# network address alone is not sufficient to identify

# the interface Corosync should bind to. In that case,

# configure the *host* address of the interface

# instead:

# bindnetaddr: 192.168.1.1

# When selecting a multicast address, consider RFC

# 2365 (which, among other things, specifies that

# 239.255.x.x addresses are left to the discretion of

# the network administrator). Do not reuse multicast

# addresses across multiple Corosync clusters sharing

# the same network.

#the multicast address

mcastaddr: 239.31.131.12

# Corosync uses the port you specify here for UDP

# messaging, and also the immediately preceding

# port. Thus if you set this to 5405, Corosync sends

# messages over UDP ports 5405 and 5404.

#the messaging port

mcastport: 5405

# Time-to-live for cluster communication packets. The

# number of hops (routers) that this ring will allow

# itself to pass. Note that multicast routing must be

# specifically enabled on most network routers.

ttl: 1

}

}

logging {

# Log the source file and line where messages are being

# generated. When in doubt, leave off. Potentially useful for

# debugging.

fileline: off

# Log to standard error. When in doubt, set to no. Useful when

# running in the foreground (when invoking "corosync -f")

to_stderr: no

# Log to a log file. When set to "no", the "logfile" option

# must not be set.

#where to store the log

to_logfile: yes

logfile: /var/log/cluster/corosync.log

# Log to the system log daemon. When in doubt, set to yes.

#to_syslog: yes

# Log debug messages (very verbose). When in doubt, leave off.

debug: off

# Log messages with time stamps. When in doubt, set to on

# (unless you are only logging to syslog, where double

# timestamps can be annoying).

timestamp: on

logger_subsys {

subsys: AMF

debug: off

}

}

#start pacemaker as a corosync plugin:

service {

ver: 0

name: pacemaker

}



3. Generate the authentication key. corosync-keygen reads 1024 bits from /dev/random, so the kernel entropy pool has to be filled; keyboard activity or installing some packages will generate the needed entropy (see the workaround sketch after the output below if the pool fills too slowly):

[root@node1 corosync]# corosync-keygen

Corosync Cluster Engine Authentication key generator.

Gathering 1024 bits for key from /dev/random.

Press keys on your keyboard to generate entropy.

Press keys on your keyboard to generate entropy (bits = 152).

Press keys on your keyboard to generate entropy (bits = 216).

Press keys on your keyboard to generate entropy (bits = 280).

Press keys on your keyboard to generate entropy (bits = 344).

Press keys on your keyboard to generate entropy (bits = 408).

Press keys on your keyboard to generate entropy (bits = 472).

Press keys on your keyboard to generate entropy (bits = 536).

Press keys on your keyboard to generate entropy (bits = 600).

Press keys on your keyboard to generate entropy (bits = 664).

Press keys on your keyboard to generate entropy (bits = 728).

Press keys on your keyboard to generate entropy (bits = 792).

Press keys on your keyboard to generate entropy (bits = 856).

Press keys on your keyboard to generate entropy (bits = 920).

Press keys on your keyboard to generate entropy (bits = 984).

Writing corosync key to /etc/corosync/authkey.
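
On a quiet machine corosync-keygen can block for a long time waiting for /dev/random to fill. A commonly used workaround (a sketch only; it weakens the randomness source while in place, so use it only on a test system) is to point /dev/random at the non-blocking /dev/urandom just for the key generation:

[root@node1 corosync]# mv /dev/random /dev/random.bak

[root@node1 corosync]# ln -s /dev/urandom /dev/random

[root@node1 corosync]# corosync-keygen

[root@node1 corosync]# rm -f /dev/random

[root@node1 corosync]# mv /dev/random.bak /dev/random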


When done, copy the configuration file and the authentication key to node2:

[root@node1 corosync]# scp -p authkey corosync.conf node2:/etc/corosync/

authkey 100% 128 0.1KB/s 00:00

corosync.conf 100% 2703 2.6KB/s 00:00


4. Start the corosync service:

[root@node1 corosync]# cd

[root@node1 ~]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]

[root@node2 ~]# service corosync start

Starting Corosync Cluster Engine (corosync): [ OK ]
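
Optionally (not part of the original run), enable the init script on both nodes so that corosync, and with it the pacemaker plugin, comes back automatically after a reboot:

[root@node1 ~]# chkconfig corosync on

[root@node2 ~]# chkconfig corosync on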



5. Check the logs:

Check whether the corosync engine started correctly:

Startup log on node1:

[root@node1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log

Jan 02 08:28:13 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.

Jan 02 08:28:13 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.

Jan 02 08:32:48 corosync [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:2055.

Jan 02 08:38:42 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.

Jan 02 08:38:42 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.


Startup log on node2:

[root@node2 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log

Jan 02 08:38:56 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.

Jan 02 08:38:56 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.


Check the TOTEM entries to confirm that the membership initialization notices went out:

[root@node1 ~]# grep "TOTEM" /var/log/cluster/corosync.log

Jan 02 08:28:13 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).

Jan 02 08:28:13 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

Jan 02 08:28:14 corosync [TOTEM ] The network interface [172.16.31.11] is now up.

Jan 02 08:28:14 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.

Jan 02 08:38:42 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).

Jan 02 08:38:42 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).

Jan 02 08:38:42 corosync [TOTEM ] The network interface [172.16.31.10] is now up.

Jan 02 08:38:42 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.

Jan 02 08:38:51 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.



Use crm_mon to check how many nodes are online:

[root@node1 ~]# crm_mon

Last updated: Fri Jan 2 08:42:23 2015

Last change: Fri Jan 2 08:38:52 2015

Stack: classic openais (with plugin)

Current DC: node1.stu31.com - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

0 Resources configured

Online: [ node1.stu31.com node2.stu31.com ]



Check that UDP port 5405 is listening:

[root@node1 ~]# ss -tunl |grep 5405

udp UNCONN 0 0 172.16.31.10:5405 *:*

udp UNCONN 0 0 239.31.131.12:5405 *:*
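
You can also ask corosync itself whether the ring is healthy; corosync-cfgtool ships with corosync, and -s prints the status of the local rings (a quick sanity check, output omitted here):

[root@node1 ~]# corosync-cfgtool -s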


Check the error log:

[root@node1 ~]# grep ERROR /var/log/cluster/corosync.log

#These are warnings about running pacemaker as a corosync plugin and can be ignored

Jan 02 08:28:14 corosync [pcmk ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.

Jan 02 08:28:14 corosync [pcmk ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN

Jan 02 08:28:37 [29004] node1.stu31.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

Jan 02 08:28:37 [29004] node1.stu31.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

Jan 02 08:32:47 [29004] node1.stu31.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

Jan 02 08:38:42 corosync [pcmk ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.

Jan 02 08:38:42 corosync [pcmk ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN

Jan 02 08:39:05 [29300] node1.stu31.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.

Jan 02 08:39:05 [29300] node1.stu31.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.


[root@node1 ~]# crm_verify -L -V

#No STONITH device is configured; these errors can be ignored for now

error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined

error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option

error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity

Errors found during check: config not valid



III. Installing the cluster configuration tool: crmsh


1. Configure the yum repositories. In my environment there is a complete local yum repository server:

[root@node1 yum.repos.d]# vim centos6.6.repo

[base]

name=CentOS $releasever $basearch on local server 172.16.0.1

baseurl=http://172.16.0.1/cobbler/ks_mirror/CentOS-6.6-$basearch/

gpgcheck=0

[extra]

name=CentOS $releasever $basearch extras

baseurl=http://172.16.0.1/centos/$releasever/extras/$basearch/

gpgcheck=0

[epel]

name=Fedora EPEL for CentOS$releasever $basearch on local server 172.16.0.1

baseurl=http://172.16.0.1/fedora-epel/$releasever/$basearch/

gpgcheck=0

[corosync2]

name=corosync2

baseurl=ftp://172.16.0.1/pub/Sources/6.x86_64/corosync/

gpgcheck=0


Copy it to node2:

[root@node1 yum.repos.d]# scp centos6.6.repo node2:/etc/yum.repos.d/

centos6.6.repo 100% 522 0.5KB/s 00:00


2. Install the crmsh package on both nodes

[root@node1 ~]# yum install -y crmsh

[root@node2 ~]# yum install -y crmsh


3. Get rid of the STONITH warnings shown above:

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure# property stonith-enabled=false

crm(live)configure# verify

#In a two-node cluster, losing one node also loses quorum, so ignore the no-quorum policy (be aware this can allow split-brain)

crm(live)configure# property no-quorum-policy=ignore

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# show

node node1.stu31.com

node node2.stu31.com

property cib-bootstrap-options: \

dc-version=1.1.11-97629de \

cluster-infrastructure="classic openais (with plugin)" \

expected-quorum-votes=2 \

stonith-enabled=false \

no-quorum-policy=ignore


No error messages are reported any more:

[root@node1 ~]# crm_verify -L -V

[root@node1 ~]#
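
For reference, the same two properties can also be set non-interactively; crmsh accepts configure sub-commands directly on the command line, so the following one-liners (a sketch of the shortcut, equivalent to the interactive session above) have the same effect:

[root@node1 ~]# crm configure property stonith-enabled=false

[root@node1 ~]# crm configure property no-quorum-policy=ignore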



IV. Building a highly available web cluster with corosync + pacemaker + crmsh


1. Sanity-test the httpd service

Create the test pages:

[[email protected] ~]# echo "node1.stu31.com" > /var/www/html/index.html

[[email protected] ~]# echo "node2.stu31.com" > /var/www/html/index.html


Start httpd and test:

On node1:

[root@node1 ~]# service httpd start

Starting httpd: [ OK ]

[root@node1 ~]# curl http://172.16.31.10

node1.stu31.com

On node2:

[root@node2 ~]# service httpd start

Starting httpd: [ OK ]

[root@node2 ~]# curl http://172.16.31.11

node2.stu31.com


Stop httpd and disable it from starting at boot (the cluster will manage it from now on):

On node1:

[root@node1 ~]# service httpd stop

Stopping httpd: [ OK ]

[root@node1 ~]# chkconfig httpd off

On node2:

[root@node2 ~]# service httpd stop

Stopping httpd: [ OK ]

[root@node2 ~]# chkconfig httpd off



2. Define the cluster VIP resource

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip='172.16.31.166' nic='eth0' cidr_netmask='16' broadcast='172.16.31.255'

crm(live)configure# verify

crm(live)configure# commit


Check the IP addresses on node1:

[root@node1 ~]# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 08:00:27:16:bc:4a brd ff:ff:ff:ff:ff:ff

inet 172.16.31.10/16 brd 172.16.255.255 scope global eth0

inet 172.16.31.166/16 brd 172.16.31.255 scope global secondary eth0

inet6 fe80::a00:27ff:fe16:bc4a/64 scope link

valid_lft forever preferred_lft forever


Switch node1 to standby:

crm(live)configure# cd

crm(live)# node

#put node1 into standby

crm(live)node# standby

#bring the standby node back online

crm(live)node# online

crm(live)node# cd

#check the status of the nodes

crm(live)# status

Last updated: Fri Jan 2 11:11:47 2015

Last change: Fri Jan 2 11:11:38 2015

Stack: classic openais (with plugin)

Current DC: node1.stu31.com - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

1 Resources configured

#Both nodes are online again, but the resource is now running on node2

Online: [ node1.stu31.com node2.stu31.com ]

webip (ocf::heartbeat:IPaddr): Started node2.stu31.com


We also want resource monitoring, so the webip resource defined above needs to be redefined:

[root@node1 ~]# crm

crm(live)# resource

#check the status of the webip resource

crm(live)resource# status webip

resource webip is running on: node2.stu31.com

#停止webip資源

crm(live)resource# stop webip

crm(live)resource# cd

crm(live)# configure

#delete the webip resource

crm(live)configure# delete webip

#redefine the webip resource, this time with a monitor operation

crm(live)configure# primitive webip IPaddr params ip=172.16.31.166 op monitor interval=10s timeout=20s

#validate the configuration

crm(live)configure# verify

#commit the change

crm(live)configure# commit
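
Deleting and recreating the primitive works, but crmsh can also change an existing resource in place. As a sketch (assuming a usable default editor on the node), the definition can be opened, the monitor operation added by hand, and the result committed:

crm(live)configure# edit webip

crm(live)configure# verify

crm(live)configure# commit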



3. Define the httpd service resource and the resource constraints:

#define the httpd service resource

crm(live)configure# primitive webserver lsb:httpd op monitor interval=30s timeout=15s

crm(live)configure# verify

#colocation constraint: the httpd service must run on the same node as the VIP

crm(live)configure# colocation webserver_with_webip inf: webserver webip

crm(live)configure# verify

#order constraint: start webip first, then webserver

crm(live)configure# order webip_before_webserver mandatory: webip webserver

crm(live)configure# verify

#location constraint: the resources prefer node1

crm(live)configure# location webip_prefer_node1 webip rule 100: uname eq node1.stu31.com

crm(live)configure# verify

#commit once everything is defined

crm(live)configure# commit

crm(live)configure# cd

#check the resource status of the cluster

crm(live)# status

Last updated: Fri Jan 2 11:27:16 2015

Last change: Fri Jan 2 11:27:07 2015

Stack: classic openais (with plugin)

Current DC: node1.stu31.com - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

2 Resources configured

Online: [ node1.stu31.com node2.stu31.com ]

webip (ocf::heartbeat:IPaddr): Started node1.stu31.com

webserver (lsb:httpd): Started node1.stu31.com

The resources are running, and they are running on node1. Let's test whether everything works.


Check the VIP on node1:

[root@node1 ~]# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 08:00:27:16:bc:4a brd ff:ff:ff:ff:ff:ff

inet 172.16.31.10/16 brd 172.16.255.255 scope global eth0

inet 172.16.31.166/16 brd 172.16.255.255 scope global secondary eth0

inet6 fe80::a00:27ff:fe16:bc4a/64 scope link

valid_lft forever preferred_lft forever


Check that the web server is listening on port 80:

[root@node1 ~]# ss -tunl |grep 80

tcp LISTEN 0 128 :::80 :::*


Test access from another host:

[[email protected] ~]# curl http://172.16.31.166

node1.stu31.com



Now put node1 into standby:

crm(live)# node standby

crm(live)# status

Last updated: Fri Jan 2 11:30:13 2015

Last change: Fri Jan 2 11:30:11 2015

Stack: classic openais (with plugin)

Current DC: node1.stu31.com - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

2 Resources configured

Node node1.stu31.com: standby

Online: [ node2.stu31.com ]

webip (ocf::heartbeat:IPaddr): Started node2.stu31.com

webserver (lsb:httpd): Started node2.stu31.com

crm(live)#

Access test:

[[email protected] ~]# curl http://172.16.31.166

node2.stu31.com


The failover test succeeds!



4. Now test resource stickiness toward the current node:

[root@node1 ~]# crm

crm(live)# configure

crm(live)configure# property default-resource-stickiness=100

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# node online

crm(live)# status

Last updated: Fri Jan 2 11:33:07 2015

Last change: Fri Jan 2 11:33:05 2015

Stack: classic openais (with plugin)

Current DC: node1.stu31.com - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

2 Resources configured

Online: [ node1.stu31.com node2.stu31.com ]

webip (ocf::heartbeat:IPaddr): Started node2.stu31.com

webserver (lsb:httpd): Started node2.stu31.com

#When the location constraint was defined above, the resources were given a preference for node1, so the expectation was that node1 would automatically take the resources back from node2 once it came online. Because we have now defined resource stickiness, node1 did not take them back after coming online, which shows that in this case the resources' stickiness to their current node is a stronger pull than their preference for node1.
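
If you want to see the numbers behind this behaviour, pacemaker can print the allocation score of every resource on every node; crm_simulate ships with pacemaker, -L works against the live cluster and -s shows the scores (a quick check, output omitted here):

[root@node1 ~]# crm_simulate -sL | grep webip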



V. Defining a filesystem resource

1. Prerequisite: a shared filesystem must already exist

Configure the NFS server:

[root@nfs ~]# mkdir /www/htdocs -pv

[root@nfs ~]# vim /etc/exports

/www/htdocs 172.16.31.0/16(rw,no_root_squash)


[root@nfs ~]# service nfs start

[root@nfs ~]# showmount -e 172.16.31.12

Export list for 172.16.31.12:

/www/htdocs 172.16.31.0/16

Create a test page:

[[email protected] ~]# echo "page from nfs filesystem" > /www/htdocs/index.html



2. Mount the NFS filesystem on a client to test it:

[root@node2 ~]# mount -t nfs 172.16.31.12:/www/htdocs /var/www/html/

[root@node2 ~]# ls /var/www/html/

index.html


Access test:

[[email protected] ~]# curl http://172.16.31.166

page from nfs filesystem

After the test succeeds, unmount the filesystem:

[root@node2 ~]# umount /var/www/html/


3. Now define the Filesystem resource:

[root@node1 ~]# crm

crm(live)# configure

#define the filesystem storage resource

crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params device="172.16.31.12:/www/htdocs" directory="/var/www/html" fstype="nfs" op monitor interval=20s timeout=40s

crm(live)configure# verify

#verify warns that the default start and stop timeouts are smaller than the advised values (we have not set them)

WARNING: webstore: default timeout 20s for start is smaller than the advised 60

WARNING: webstore: default timeout 20s for stop is smaller than the advised 60

#delete the resource and define it again

crm(live)configure# delete webstore

#this time add start and stop timeouts

crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params device="172.16.31.12:/www/htdocs" directory="/var/www/html" fstype="nfs" op monitor interval=20s timeout=40s op start timeout=60s op stop timeout=60s

crm(live)configure# verify

#define a resource group that gathers all the resources the web service needs, which makes them easier to manage

crm(live)configure# group webservice webip webstore webserver

INFO: resource references in location:webip_prefer_node1 updated

INFO: resource references in colocation:webserver_with_webip updated

INFO: resource references in order:webip_before_webserver updated

INFO: resource references in colocation:webserver_with_webip updated

INFO: resource references in order:webip_before_webserver updated

#commit the definition, then check the resource status

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Fri Jan 2 11:52:51 2015

Last change: Fri Jan 2 11:52:44 2015

Stack: classic openais (with plugin)

Current DC: node1.stu31.com - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

3 Resources configured

Node node2.stu31.com: standby

Online: [ node1.stu31.com ]

Resource Group: webservice

webip (ocf::heartbeat:IPaddr): Started node1.stu31.com

webstore (ocf::heartbeat:Filesystem): Started node1.stu31.com

webserver (lsb:httpd): Started node1.stu31.com

#finally, define the start order: mount the storage first, then start the httpd service:

crm(live)configure# order webstore_before_webserver mandatory: webstore webserver

crm(live)configure# verify

crm(live)configure# commit

crm(live)configure# cd

crm(live)# status

Last updated: Fri Jan 2 11:55:00 2015

Last change: Fri Jan 2 11:54:10 2015

Stack: classic openais (with plugin)

Current DC: node1.stu31.com - partition with quorum

Version: 1.1.11-97629de

2 Nodes configured, 2 expected votes

3 Resources configured

Node node2.stu31.com: standby

Online: [ node1.stu31.com ]

Resource Group: webservice

webip (ocf::heartbeat:IPaddr): Started node1.stu31.com

webstore (ocf::heartbeat:Filesystem): Started node1.stu31.com

webserver (lsb:httpd): Started node1.stu31.com

crm(live)# quit

bye


Access test:

[[email protected] ~]# curl http://172.16.31.166

page from nfs filesystem
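
As one last check (run on whichever node currently holds the webservice group, node1 in this run), confirm that the Filesystem resource really mounted the NFS export behind the page that was just served:

[root@node1 ~]# mount | grep /var/www/html

[root@node1 ~]# df -h /var/www/html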



The access test succeeds!


At this point, a highly available web cluster built with corosync + pacemaker + crmsh is up and running!


This article comes from the "眼眸刻著你的微笑" blog; please keep this attribution: http://dengaosky.blog.51cto.com/9215128/1964586
