
Deploying a Highly Available RabbitMQ Cluster


Message queues are a critical piece of basic infrastructure. To keep the company's queue service highly available and load balanced, it is implemented as: RabbitMQ Cluster + Queue HA + Haproxy + Keepalived


Three rabbitMQ servers form a broker cluster, so the service survives the failure of up to two servers;


on top of that, queue mirroring provides high availability for the queues themselves; in this setup every queue is mirrored to all servers, i.e. one master and two slaves;


to give clients a single entry address, haproxy acts as a layer-4 proxy in front of the MQ service, doing simple round-robin load balancing, with health checks so that failed nodes are hidden from clients;


and two haproxy servers run keepalived to make that client entry point itself highly available.




For basic RabbitMQ queue concepts, see:
http://baike.baidu.com/link?url=ySoVSgecyl7dcLNqyjvwXVW-nNTSA7tIHmhwTHx37hL_H4wnYa70VCqmOZ59AaSEz2DYyfUiSMnQV2tHKD7OQK


Official documentation:
http://www.rabbitmq.com/clustering.html
http://www.rabbitmq.com/ha.html
http://www.rabbitmq.com/man/rabbitmqctl.1.man.html


http://www.rabbitmq.com/production-checklist.html





I. Fundamentals:


RabbitMQ cluster: a RabbitMQ broker cluster is a logical group of Erlang nodes, each running the rabbitmq application and sharing users, virtual hosts, queues, exchanges, bindings and runtime parameters.


What is replicated across the cluster: all data and state required for the operation of a broker is replicated to every node, except message queues, which live on a single node but are visible and reachable from all nodes (replicating queue contents requires queue HA, covered below).


Prerequisites for a cluster:
1. All nodes must run the same Erlang and rabbitmq versions.
2. Hostname resolution: nodes communicate with each other by host name; this 3-node setup uses /etc/hosts entries.


Ports and what they are used for (if a firewall is in the way, see the sketch below):
5672   client (AMQP) connections
15672  web management UI / HTTP API
25672  inter-node cluster communication
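If a firewall sits between the nodes or in front of the clients, these ports have to be reachable. A minimal iptables sketch, assuming iptables is the firewall in use on the nodes (adapt to your actual policy); note that Erlang's epmd port 4369 is also required for the nodes to discover each other:

[root@xx_rabbitMQ135 ~]# iptables -A INPUT -p tcp -m multiport --dports 4369,5672,15672,25672 -j ACCEPT
[root@xx_rabbitMQ135 ~]# service iptables save
# repeat on all three nodes; 4369 is epmd, 25672 is inter-node traffic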


Ways to form the cluster:
1. Manually with rabbitmqctl (the method used in this document)
2. Declared in a configuration file
3. Declared via the rabbitmq-autocluster plugin
4. Declared via the rabbitmq-clusterer plugin


How the cluster handles failures:
1. A rabbitmq broker cluster tolerates the failure of individual nodes.
2. On network partitions:
clusters are recommended for LAN environments and are not suitable for WAN;
to connect brokers over a WAN, the Shovel or Federation plugins are the better solution.
Shovel and Federation are not the same thing as clustering.


RabbitMQ clustering has several modes of dealing with network partitions, primarily consistency oriented. Clustering is meant to be used across LAN. It is not recommended to run clusters that span WAN. The Shovel or Federation plugins are better solutions for connecting brokers across a WAN. Note that Shovel and Federation are not equivalent to clustering.


Node storage modes:
For data durability all nodes currently run as disc nodes; if load grows and more performance is needed, switching some nodes to RAM mode can be considered (a minimal sketch follows).
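Should a node ever need to be converted to a RAM node later, change_cluster_node_type can presumably be used; a minimal sketch, not applied in our environment (the node must already be part of the cluster and its application must be stopped first):

[root@xx_rabbitMQ136 ~]# rabbitmqctl stop_app
[root@xx_rabbitMQ136 ~]# rabbitmqctl change_cluster_node_type ram    # or disc, to switch back
[root@xx_rabbitMQ136 ~]# rabbitmqctl start_app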




How cluster nodes authenticate to each other:
Via the Erlang cookie, which acts as a shared secret; it can be any length, as long as it is identical on every node.
When rabbitmq server starts for the first time, the Erlang VM automatically generates a random cookie file.
Cookie file location: /var/lib/rabbitmq/.erlang.cookie or /root/.erlang.cookie.
Ours lives at /root/.erlang.cookie.
To guarantee the cookie is identical everywhere, it is copied from one node to the others (a quick consistency check is sketched below).
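A quick way to confirm the cookies really are identical is to compare their checksums on all three nodes; a minimal sketch, assuming the cookie lives in /root/.erlang.cookie as described above:

[root@xx_rabbitMQ135 ~]# md5sum /root/.erlang.cookie
[root@xx_rabbitMQ136 ~]# md5sum /root/.erlang.cookie
[root@xx_rabbitMQ137 ~]# md5sum /root/.erlang.cookie
# the three checksums must be identical; if not, re-copy the cookie and restart the affected node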





II. RabbitMQ cluster deployment


First install standalone rabbitMQ on every node, following the internal guide written by a colleague on deploying the Erlang environment and rabbitmq.


The cluster configuration steps are as follows:
1. Set up hosts resolution, identical on all nodes
[root@xx_rabbitMQ135 ~]# tail -n4 /etc/hosts
###rabbitmq cluster communication; identical on all nodes  laijingli 20160220
192.168.100.135 xx_rabbitMQ135
192.168.100.136 xx_rabbitMQ136
192.168.100.137 xx_rabbitMQ137


2. Distribute the cookie used for inter-node authentication (see the permissions note below)
[root@xx_rabbitMQ135 ~]# scp /root/.erlang.cookie 192.168.100.136:~
[root@xx_rabbitMQ135 ~]# scp /root/.erlang.cookie 192.168.100.137:~
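The Erlang VM refuses to use a cookie file that is readable by other users, so after copying it is worth making sure ownership and permissions are correct on 136 and 137 before the brokers are started; a minimal sketch:

[root@xx_rabbitMQ136 ~]# chown root:root /root/.erlang.cookie
[root@xx_rabbitMQ136 ~]# chmod 400 /root/.erlang.cookie
# repeat on 137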




3. Start an independent, standalone rabbitmq broker on each node:
[root@xx_rabbitMQ135 ~]# rabbitmq-server -detached
[root@xx_rabbitMQ136 ~]# rabbitmq-server -detached
[root@xx_rabbitMQ137 ~]# rabbitmq-server -detached


This creates an independent RabbitMQ broker on each of the three nodes.


Check the broker status:
[root@xx_rabbitMQ135 ~]# rabbitmqctl status
Status of node rabbit@xx_rabbitMQ135 ...
[{pid,116968},
{running_applications,
[{rabbitmq_shovel_management,"Shovel Status","3.6.0"},
{rabbitmq_management,"RabbitMQ Management Console","3.6.0"},


Check the broker's cluster status:
[root@xx_rabbitMQ135 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@xx_rabbitMQ135 ...
[{nodes,[{disc,[rabbit@xx_rabbitMQ135]}]},
{running_nodes,[rabbit@xx_rabbitMQ135]},
{cluster_name,<<"rabbit@xx_rabbitMQ135">>},
{partitions,[]}]




4. Create the broker cluster:
To tie the three nodes together, 136 and 137 are each joined to 135's cluster.


First stop the rabbitmq application on 136 and join it to 135's cluster (join_cluster implicitly resets the node, deleting all resources and data on it); afterwards the cluster status shows two nodes.
[root@xx_rabbitMQ136 ~]# rabbitmqctl stop_app
Stopping node rabbit@xx_rabbitMQ136 ...


[root@xx_rabbitMQ136 ~]# rabbitmqctl join_cluster rabbit@xx_rabbitMQ135
Clustering node rabbit@xx_rabbitMQ136 with rabbit@xx_rabbitMQ135 ...


[root@xx_rabbitMQ136 ~]# rabbitmqctl start_app


[root@xx_rabbitMQ136 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@xx_rabbitMQ136 ...
[{nodes,[{disc,[rabbit@xx_rabbitMQ135,rabbit@xx_rabbitMQ136]}]}]


Do the same for 137; it makes no difference whether it joins via node 135 or 136.
[root@xx_rabbitMQ135 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@xx_rabbitMQ135 ...
[{nodes,[{disc,[rabbit@xx_rabbitMQ135,rabbit@xx_rabbitMQ136,
rabbit@xx_rabbitMQ137]}]},
{running_nodes,[rabbit@xx_rabbitMQ136,rabbit@xx_rabbitMQ137,
rabbit@xx_rabbitMQ135]},
{cluster_name,<<"rabbit@xx_rabbitMQ135">>},
{partitions,[]}]





Rename the cluster to xx_rabbitMQ_cluster (by default it is named after the first node):
[root@xx_rabbitMQ135 ~]# rabbitmqctl set_cluster_name xx_rabbitMQ_cluster


[root@xx_rabbitMQ135 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@xx_rabbitMQ135 ...
[{nodes,[{disc,[rabbit@xx_rabbitMQ135,rabbit@xx_rabbitMQ136,
rabbit@xx_rabbitMQ137]}]},
{running_nodes,[rabbit@xx_rabbitMQ135,rabbit@xx_rabbitMQ136,
rabbit@xx_rabbitMQ137]},
{cluster_name,<<"xx_rabbitMQ_cluster">>},
{partitions,[]}]




5. Restarting the cluster:
Restart the cluster with rabbitmqctl stop followed by rabbitmq-server -detached, and watch how the cluster status changes.


Important notes:
(1) When the entire cluster is brought down, the last node to go down must be the first node to be brought back online. If this does not happen, the nodes will wait 30 seconds for the last disc node to come back online, and fail afterwards. If the last node to go offline cannot be brought back up, it can be removed from the cluster using the forget_cluster_node command - consult the rabbitmqctl manpage for more information.

(2) If all cluster nodes stop in a simultaneous and uncontrolled manner (for example with a power cut), you can be left with a situation in which all nodes think that some other node stopped after them. In this case you can use the force_boot command on one node to make it bootable again (a minimal sketch follows) - consult the rabbitmqctl manpage for more information.
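A minimal sketch of case (2), assuming 135 is the node chosen to come up first after a site-wide power loss (force_boot is issued while the node is still down):

[root@xx_rabbitMQ135 ~]# rabbitmqctl force_boot
[root@xx_rabbitMQ135 ~]# rabbitmq-server -detached
# once 135 is up, start the remaining nodes normally and they will rejoin it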




6. Breaking up the cluster:
When a node should no longer belong to the cluster, it has to be removed explicitly. This can be done locally (on the node itself, as below) or remotely from another node (see the sketch after the output).


[root@xx_rabbitMQ137 ~]# rabbitmqctl stop_app
Stopping node rabbit@xx_rabbitMQ137 ...


[root@xx_rabbitMQ137 ~]# rabbitmqctl reset
Resetting node rabbit@xx_rabbitMQ137 ...


[root@xx_rabbitMQ137 ~]# rabbitmqctl start_app
Starting node rabbit@xx_rabbitMQ137 ...


[root@xx_rabbitMQ137 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@xx_rabbitMQ137 ...
[{nodes,[{disc,[rabbit@xx_rabbitMQ137]}]},
{running_nodes,[rabbit@xx_rabbitMQ137]},
{cluster_name,<<"rabbit@xx_rabbitMQ137">>},
{partitions,[]}]
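The remote variant removes a dead or unreachable node from one of the surviving cluster members; a minimal sketch, assuming 137 has to be expelled while it is down instead of resetting it locally as above:

[root@xx_rabbitMQ135 ~]# rabbitmqctl forget_cluster_node rabbit@xx_rabbitMQ137
# 137 must then be reset (rabbitmqctl reset) before it can ever rejoin a cluster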




7. Testing client connections to the cluster


Via the web management UI you can create queues, publish messages, create users, create policies, and so on:
http://192.168.100.137:15672/


Or test from the command line with rabbitmqadmin:


[root@xx_rabbitMQ136 ~]# wget http://192.168.100.136:15672/cli/rabbitmqadmin


[root@xx_rabbitMQ136 ~]# chmod +x rabbitmqadmin


[root@xx_rabbitMQ136 ~]# mv rabbitmqadmin /usr/local/rabbitmq_server-3.6.0/sbin/


Declare an exchange
$ rabbitmqadmin declare exchange name=my-new-exchange type=fanout
exchange declared
Declare a queue, with optional parameters
$ rabbitmqadmin declare queue name=my-new-queue durable=false
queue declared
Publish a message
$ rabbitmqadmin publish exchange=my-new-exchange routing_key=test payload="hello, world"
Message published
And get it back
$ rabbitmqadmin get queue=test requeue=false
+-------------+----------+---------------+--------------+------------------+-------------+
| routing_key | exchange | message_count | payload | payload_encoding | redelivered |
+-------------+----------+---------------+--------------+------------------+-------------+
| test | | 0 | hello, world | string | False |
+-------------+----------+---------------+--------------+------------------+-------------+





A problem found during testing:
[root@xx_rabbitMQ135 ~]# rabbitmqctl stop_app
[root@xx_rabbitMQ135 ~]# rabbitmqctl stop
After stop_app (or stopping the broker) on node 135, the queues on 135 become unavailable. After restarting the app or broker on 135 the cluster works again, but the messages in 135's queues have been wiped (the queues themselves still exist).


For a production environment this is clearly unacceptable; if queues cannot be made highly available, clustering loses much of its point. Fortunately rabbitmq supports Highly Available Queues, introduced next.





III. Queue HA configuration


By default a queue in a cluster lives on a single node (whichever node it was declared on), whereas exchanges and bindings exist on all nodes by default.
Queues can be made more available through mirroring. Queue HA relies on the rabbitmq cluster, so mirrored queues are likewise unsuitable for WAN deployment. Each mirrored queue has one master and one or more slaves; if the master fails for any reason, the oldest slave is promoted to be the new master.
Messages published to the queue are replicated to all slaves. Consumers are connected to the master regardless of which node they connect to; when the master confirms that a message should be deleted, all slaves delete it from the queue as well.
Queue mirroring provides high availability but does not spread load, because every participating node still does all the work.




1. Configuring queue mirroring
Mirroring is configured through policies. A policy can be created at any time: you can create a non-mirrored queue first and mirror it later, or the other way round.
The difference between a mirrored and a non-mirrored queue is that a non-mirrored queue has no slaves and runs faster than a mirrored one.


Create a policy and set ha-mode; there are three modes: all, exactly and nodes.
Each queue has a home node, called the queue master node.


(1) Set a policy so that queues whose names start with ha. are mirrored to all other nodes of the cluster; after a node goes down and rejoins, its queue contents have to be synchronised manually:
rabbitmqctl set_policy ha-all-queue "^ha\." '{"ha-mode":"all"}'


(2) Set a policy so that queues whose names start with ha. are mirrored to all other nodes of the cluster and queue contents are synchronised automatically after a failed node rejoins (this is what we use in production; a verification sketch follows):
rabbitmqctl set_policy ha-all-queue "^ha\." '{"ha-mode":"all","ha-sync-mode":"automatic"}'
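To confirm that the policy has been applied and that the mirrors are in sync, the policies and the slave pids can be listed; a minimal sketch (ha.test is just an example queue name matching the policy):

[root@xx_rabbitMQ135 ~]# rabbitmqctl list_policies
[root@xx_rabbitMQ135 ~]# rabbitmqctl list_queues name policy slave_pids synchronised_slave_pids
# a fully mirrored ha.test queue should list slave pids on the two other nodes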




2. Questions:


With mirrored queues configured, queue contents survive the failure of a single node, but after a restart of the entire cluster the messages are still lost. How can message contents be persisted?
The nodes already run in disc mode and durability was declared at creation time, so why does it still not work?


Because persistence must also be requested per message when it is published: if messages are published as persistent and the queue they live in was declared as a durable queue, the messages survive a full cluster restart (see the sketch below).
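A minimal sketch using the rabbitmqadmin tool installed earlier: declare a durable queue whose name matches the ha. policy and publish a message marked persistent (delivery_mode=2); the queue name and payload are examples only:

$ rabbitmqadmin declare queue name=ha.test durable=true
$ rabbitmqadmin publish exchange=amq.default routing_key=ha.test payload="hello, world" properties='{"delivery_mode":2}'
# a persistent message in a durable, mirrored queue survives a restart of the whole cluster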




IV. Ways for clients to connect to the rabbitMQ cluster service:
1. A client can connect to any node of the cluster; if that node fails, the client has to reconnect to another available node on its own (not recommended, as this is not transparent to the client).
2. Dynamic DNS with a short TTL.
3. An HA layer-4 load balancer in front of the cluster (e.g. haproxy + keepalived).





V. Deploying Haproxy + keepalived


The message queue is a key piece of company infrastructure; to give clients a stable, transparent rabbitmq service, haproxy + keepalived provide a highly available single entry point for rabbitmq together with basic load balancing.


To keep installation and configuration simple, haproxy and keepalived are installed via yum; see also the earlier post on building a robust, highly available layer-7 load balancer with keepalived + nginx.


1. Installation


yum install haproxy keepalived -y


2. Enable the key services at boot (the same applies to keepalived, see the note below)


[root@xxhaproxy101 keepalived]# chkconfig --list|grep haproxy
haproxy 0:off 1:off 2:off 3:off 4:off 5:off 6:off
[root@xxhaproxy101 keepalived]# chkconfig haproxy on
[root@xxhaproxy101 keepalived]# chkconfig --list|grep haproxy
haproxy 0:off 1:off 2:on 3:on 4:on 5:on 6:off
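keepalived presumably needs the same treatment so that it also comes back after a reboot:

[root@xxhaproxy101 keepalived]# chkconfig keepalived on
[root@xxhaproxy101 keepalived]# chkconfig --list | grep keepalived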


3. Send haproxy's log to /var/log/haproxy.log (a quick verification sketch follows the restart)


[root@xxhaproxy101 haproxy]# more /etc/rsyslog.d/haproxy.conf
$ModLoad imudp
$UDPServerRun 514


local0.* /var/log/haproxy.log


[root@xxhaproxy101 haproxy]# /etc/init.d/rsyslog restart
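Before haproxy is even started, the logging path can be verified by emitting a test message on the local0 facility with logger; a minimal sketch:

[root@xxhaproxy101 haproxy]# logger -p local0.info "haproxy rsyslog test"
[root@xxhaproxy101 haproxy]# tail -n1 /var/log/haproxy.log
# the test line should appear in /var/log/haproxy.log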





4. The haproxy configuration, identical on both machines


[root@xxhaproxy101 keepalived]# more /etc/haproxy/haproxy.cfg


#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------


#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the ‘-r‘ option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2 notice


chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon


# turn on stats unix socket
stats socket /var/lib/haproxy/stats


#---------------------------------------------------------------------
# common defaults that all the ‘listen‘ and ‘backend‘ sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode tcp
option tcplog
option dontlognull
option http-server-close
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000




###haproxy statistics monitor by laijingli 20160222
listen statics 0.0.0.0:8888
mode http
log 127.0.0.1 local0 debug
transparent
stats refresh 60s
stats uri /
stats realm Haproxy\ statistics
stats auth laijingli:xxxxx


#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend xx_rabbitMQ_cluster_frontend
mode tcp
option tcpka
log 127.0.0.1 local0 debug
bind 0.0.0.0:5672
use_backend xx_rabbitMQ_cluster_backend


frontend xx_rabbitMQ_cluster_management_frontend
mode tcp
option tcpka
log 127.0.0.1 local0 debug
bind 0.0.0.0:15672
use_backend xx_rabbitMQ_cluster_management_backend


#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend xx_rabbitMQ_cluster_backend
balance roundrobin
server xx_rabbitMQ135 192.168.100.135:5672 check inter 3s rise 1 fall 2
server xx_rabbitMQ136 192.168.100.136:5672 check inter 3s rise 1 fall 2
server xx_rabbitMQ137 192.168.100.137:5672 check inter 3s rise 1 fall 2


backend xx_rabbitMQ_cluster_management_backend
balance roundrobin
server xx_rabbitMQ135 192.168.100.135:15672 check inter 3s rise 1 fall 2
server xx_rabbitMQ136 192.168.100.136:15672 check inter 3s rise 1 fall 2
server xx_rabbitMQ137 192.168.100.137:15672 check inter 3s rise 1 fall 2


[root@xxhaproxy101 keepalived]#
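Before (re)starting haproxy it is worth letting it validate the configuration; a minimal sketch:

[root@xxhaproxy101 keepalived]# haproxy -c -f /etc/haproxy/haproxy.cfg   # config syntax/validity check
[root@xxhaproxy101 keepalived]# /etc/init.d/haproxy restart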





5. The keepalived configuration. Because the HA servers also carry an HTTP (API gateway) service alongside rabbitmq, both are included; note in particular that the keepalived configuration is NOT the same on the two servers.


[root@xxhaproxy101 keepalived]# more /etc/keepalived/keepalived.conf
####Configuration File for keepalived
####keepalived HA configuration for xx company's production internal API gateway
####keepalived HA configuration for xx company's production rabbitMQ cluster
#### laijingli 20151213


global_defs {
notification_email {
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id xxhaproxy101 ## xxhaproxy101 on master , xxhaproxy102 on backup
}




###simple check with killall -0 which is less expensive than pidof to verify that nginx is running
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight 2
fall 2
rise 1
}


vrrp_instance YN_API_GATEWAY {
state MASTER ## MASTER on master , BACKUP on backup
interface em1
virtual_router_id 101 ## YN_API_GATEWAY virtual_router_id
priority 200 ## 200 on master , 199 on backup
advert_int 1
###use unicast to avoid interference between multiple keepalived groups on the same LAN
unicast_src_ip 192.168.100.101 ##local ip
unicast_peer {
192.168.100.102 ##peer ip
}
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.100.99 ## VIP
}
###with only one NIC there is no need to track the network interface
#track_interface {
# em1
#}
track_script {
chk_nginx
}
###on state change, send an email notification and log locally; SMS alerts will be added later
notify_master /usr/local/bin/keepalived_notify.sh notify_master
notify_backup /usr/local/bin/keepalived_notify.sh notify_backup
notify_fault /usr/local/bin/keepalived_notify.sh notify_fault
notify /usr/local/bin/keepalived_notify.sh notify
smtp_alert
}




###simple check with killall -0 which is less expensive than pidof to verify that haproxy is running
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 1
weight 2
fall 2
rise 1
}
vrrp_instance xx_rabbitMQ_GATEWAY {
state BACKUP ## MASTER on master , BACKUP on backup
interface em1
virtual_router_id 111 ## xx_rabbitMQ_GATEWAY virtual_router_id
priority 199 ## 200 on master , 199 on backup
advert_int 1
###use unicast to avoid interference between multiple keepalived groups on the same LAN
unicast_src_ip 192.168.100.101 ##local ip
unicast_peer {
192.168.100.102 ##peer ip
}
authentication {
auth_type PASS
auth_pass 123456
}
virtual_ipaddress {
192.168.100.100 ## VIP
}
###with only one NIC there is no need to track the network interface
#track_interface {
# em1
#}
track_script {
chk_haproxy
}
###on state change, send an email notification and log locally; SMS alerts will be added later
notify_master /usr/local/bin/keepalived_notify_for_haproxy.sh notify_master
notify_backup /usr/local/bin/keepalived_notify_for_haproxy.sh notify_backup
notify_fault /usr/local/bin/keepalived_notify_for_haproxy.sh notify_fault
notify /usr/local/bin/keepalived_notify_for_haproxy.sh notify
smtp_alert
}





[root@xxhaproxy102 keepalived]# more /etc/keepalived/keepalived.conf
####Configuration File for keepalived
####keepalived HA configuration for xx company's production internal API gateway
####keepalived HA configuration for xx company's production rabbitMQ cluster
#### laijingli 20151213


global_defs {
notification_email {
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id xxhaproxy102 ## xxhaproxy101 on master , xxhaproxy102 on backup
}




###simple check with killall -0 which is less expensive than pidof to verify that nginx is running
vrrp_script chk_nginx {
script "killall -0 nginx"
interval 1
weight 2
fall 2
rise 1
}


vrrp_instance YN_API_GATEWAY {
state BACKUP ## MASTER on master , BACKUP on backup
interface em1
virtual_router_id 101 ## YN_API_GATEWAY virtual_router_id
priority 199 ## 200 on master , 199 on backup
advert_int 1
###use unicast to avoid interference between multiple keepalived groups on the same LAN
unicast_src_ip 192.168.100.102 ##local ip
unicast_peer {
192.168.100.101 ##peer ip
}
authentication {
auth_type PASS
auth_pass YN_API_HA_PASS
}
virtual_ipaddress {
192.168.100.99 ## VIP
}
###with only one NIC there is no need to track the network interface
#track_interface {
# em1
#}
track_script {
chk_nginx
}
###on state change, send an email notification and log locally; SMS alerts will be added later
notify_master /usr/local/bin/keepalived_notify.sh notify_master
notify_backup /usr/local/bin/keepalived_notify.sh notify_backup
notify_fault /usr/local/bin/keepalived_notify.sh notify_fault
notify /usr/local/bin/keepalived_notify.sh notify
smtp_alert
}




###simple check with killall -0 which is less expensive than pidof to verify that haproxy is running
vrrp_script chk_haproxy {
script "killall -0 haproxy"
interval 1
weight 2
fall 2
rise 1
}
vrrp_instance xx_rabbitMQ_GATEWAY {
state MASTER ## MASTER on master , BACKUP on backup
interface em1
virtual_router_id 111 ## xx_rabbitMQ_GATEWAY virtual_router_id
priority 200 ## 200 on master , 199 on backup
advert_int 1
###use unicast to avoid interference between multiple keepalived groups on the same LAN
unicast_src_ip 192.168.100.102 ##local ip
unicast_peer {
192.168.100.101 ##peer ip
}
authentication {
auth_type PASS
auth_pass YN_MQ_HA_PASS
}
virtual_ipaddress {
192.168.100.100 ## VIP
}
###with only one NIC there is no need to track the network interface
#track_interface {
# em1
#}
track_script {
chk_haproxy
}
###on state change, send an email notification and log locally; SMS alerts will be added later
notify_master /usr/local/bin/keepalived_notify_for_haproxy.sh notify_master
notify_backup /usr/local/bin/keepalived_notify_for_haproxy.sh notify_backup
notify_fault /usr/local/bin/keepalived_notify_for_haproxy.sh notify_fault
notify /usr/local/bin/keepalived_notify_for_haproxy.sh notify
smtp_alert
}





The notification scripts referenced in the configuration, identical on both servers:


[root@xxhaproxy101 keepalived]# more /usr/local/bin/keepalived_notify.sh
#!/bin/bash
###keepalived notify script: record HA state transitions to log files


###record the state transition to a log file, to make troubleshooting easier
logfile=/var/log/keepalived.notify.log
echo --------------- >> $logfile
echo `date` [`hostname`] keepalived HA role state transition: $1 $2 $3 $4 $5 $6 >> $logfile


###write the state transition to an nginx-served file so the HA state can be checked via the web (be careful never to expose this to the public internet)
echo `date` `hostname` $1 $2 $3 $4 $5 $6 " <br>" > /usr/share/nginx/html/index_for_nginx.html


###merge the nginx API and rabbitmq HA logs into a single file
cat /usr/share/nginx/html/index_for* > /usr/share/nginx/html/index.html







[root@xxhaproxy101 keepalived]# more /usr/local/bin/keepalived_notify_for_haproxy.sh
#!/bin/bash
###keepalived notify script: record HA state transitions to log files


###record the state transition to a log file, to make troubleshooting easier
logfile=/var/log/keepalived.notify.log
echo --------------- >> $logfile
echo `date` [`hostname`] keepalived HA role state transition: $1 $2 $3 $4 $5 $6 >> $logfile


###write the state transition to an nginx-served file so the HA state can be checked via the web (be careful never to expose this to the public internet)
echo `date` `hostname` $1 $2 $3 $4 $5 $6 " <br>" > /usr/share/nginx/html/index_for_haproxy.html


###merge the nginx API and rabbitmq HA logs into a single file
cat /usr/share/nginx/html/index_for* > /usr/share/nginx/html/index.html
[root@xxhaproxy101 keepalived]#
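Once both machines are configured, start keepalived on both and check which one currently holds each VIP; a minimal sketch (em1 and the VIPs are the ones defined above):

[root@xxhaproxy101 keepalived]# /etc/init.d/keepalived start
[root@xxhaproxy102 keepalived]# /etc/init.d/keepalived start
[root@xxhaproxy101 keepalived]# ip addr show em1 | grep -E '192\.168\.100\.(99|100)'
[root@xxhaproxy102 keepalived]# ip addr show em1 | grep -E '192\.168\.100\.(99|100)'
# with the priorities above, 192.168.100.99 should sit on xxhaproxy101 and 192.168.100.100 on xxhaproxy102;
# stopping haproxy on xxhaproxy102 should move .100 over to xxhaproxy101 within a few seconds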





6. The haproxy statistics page:


http://192.168.100.101:8888/


7. Check which server each keepalived-managed HA service is currently running on:


http://192.168.100.101


8. Access the rabbitMQ service through the VIP (see the sketch below):


192.168.100.100:5672
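Applications point their AMQP connections at 192.168.100.100:5672; a quick end-to-end check can also be done through the proxied management port with rabbitmqadmin. A minimal sketch (the user/password are placeholders for an account that exists on the cluster; the default guest account only works from localhost, so it cannot be used through the VIP):

$ rabbitmqadmin -H 192.168.100.100 -P 15672 -u youruser -p yourpass list nodes name type running
$ rabbitmqadmin -H 192.168.100.100 -P 15672 -u youruser -p yourpass publish exchange=amq.default routing_key=ha.test payload="via vip"
$ rabbitmqadmin -H 192.168.100.100 -P 15672 -u youruser -p yourpass get queue=ha.test requeue=false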




VI. Further notes:


See the internal document "xx company rabbitmq client usage guidelines":
1. Use vhosts to isolate different applications, users and business groups (a minimal sketch follows).
2. Persistence for exchanges, queues, messages, etc. must be declared explicitly by the client.
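A minimal sketch of point 1, creating a vhost and a dedicated user for one business group (all names are examples; since users and vhosts are replicated cluster-wide, running this on any one node is enough):

[root@xx_rabbitMQ135 ~]# rabbitmqctl add_vhost /order
[root@xx_rabbitMQ135 ~]# rabbitmqctl add_user order_app StrongPassword
[root@xx_rabbitMQ135 ~]# rabbitmqctl set_permissions -p /order order_app ".*" ".*" ".*"
[root@xx_rabbitMQ135 ~]# rabbitmqctl set_user_tags order_app management   # optional: allow web UI login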
