MHA cluster (GTID replication) and VIP failover
The previous post covered how to configure the MHA architecture, so this one does not repeat those details; it only covers the MySQL replication setup, which here uses GTID plus semi-synchronous replication.
The steps are the same as in the previous post; only the MySQL replication setup differs. A detailed GTID setup walkthrough is at https://www.cnblogs.com/wxzhe/p/10055154.html
Simply swap the previous post's binlog-file/position-based replication setup for the GTID-based procedure below; everything else stays the same, so only the GTID setup is shown here.
The four servers are assigned as follows:
MHA manager node: 10.0.102.214
MySQL master: 10.0.102.204
MySQL slave 1: 10.0.102.179 (this node can act as the candidate master)
MySQL slave 2: 10.0.102.221
Setting up GTID-based replication
Step 1: make sure the data on the three MySQL servers is consistent (all in the same state).
Step 2: create the replication account on the master and on the candidate master, with the same username and password on both.
Step 3: enable GTID and load the semi-synchronous replication plugins.
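Step 2 can be sketched as follows, assuming the account name repl and the password 123456 that the later change master command uses; run it on both the master and the candidate master (the 10.0.102.% host pattern is my assumption, restrict it to your own subnet):

```sql
-- Hypothetical host pattern; user/password match the change master command used later.
CREATE USER 'repl'@'10.0.102.%' IDENTIFIED BY '123456';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.102.%';
FLUSH PRIVILEGES;
```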
Add the following to the configuration file on all three MySQL servers:

plugin_dir=/usr/local/mysql/lib/plugin/    # plugin path of this source-built MySQL 5.7; adjust it for an rpm install
plugin_load_add=semisync_master.so         # install both semisync plugins on every server;
plugin_load_add=semisync_slave.so          #   note plugin_load_add: a second plain plugin_load line would override the first
gtid-mode=on                               # enable GTID
enforce-gtid-consistency                   # ensure global GTID consistency
log-bin=
character_set_server=utf8                  # character set
log_slave_updates                          # must be enabled for GTID replication
After updating the configuration files, restart the servers and run the following on each slave:
mysql> change master to master_host="10.0.102.204", master_user="repl", master_password="123456", master_auto_position=1;
Query OK, 0 rows affected, 2 warnings (0.09 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
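Once the servers are back up, it is worth confirming in a mysql session that the plugins and GTID mode actually took effect; a sketch using standard MySQL 5.7 variable names:

```sql
-- both semisync plugins should show PLUGIN_STATUS = ACTIVE
SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'rpl_semi_sync%';
-- gtid_mode and enforce_gtid_consistency should both be ON
SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';
```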
If nothing above reports an error, check the replication state with show slave status.
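The fields that matter in the show slave status output are Slave_IO_Running and Slave_SQL_Running. As a sketch, the check can be scripted; check_slave_ok is my own helper name, not an MHA tool:

```shell
# Reads `SHOW SLAVE STATUS\G` output on stdin and reports whether both
# replication threads are running.
check_slave_ok() {
  status=$(cat)
  if printf '%s\n' "$status" | grep -q 'Slave_IO_Running: Yes' &&
     printf '%s\n' "$status" | grep -q 'Slave_SQL_Running: Yes'; then
    echo "replication OK"
  else
    echo "replication BROKEN"
    return 1
  fi
}

# On a live slave you would pipe the real status in:
#   mysql -e 'SHOW SLAVE STATUS\G' | check_slave_ok
```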
Checking MHA status
SSH check:
# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Sun Dec 9 11:42:50 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Dec 9 11:42:50 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sun Dec 9 11:42:50 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sun Dec 9 11:42:50 2018 - [info] Starting SSH connection tests..
Sun Dec 9 11:42:51 2018 - [debug]
Sun Dec 9 11:42:50 2018 - [debug]  Connecting via SSH from root@10.0.102.204(10.0.102.204:22) to root@10.0.102.179(10.0.102.179:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.204(10.0.102.204:22) to root@10.0.102.221(10.0.102.221:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:51 2018 - [debug]
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.179(10.0.102.179:22) to root@10.0.102.204(10.0.102.204:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.179(10.0.102.179:22) to root@10.0.102.221(10.0.102.221:22)..
Sun Dec 9 11:42:51 2018 - [debug]   ok.
Sun Dec 9 11:42:52 2018 - [debug]
Sun Dec 9 11:42:51 2018 - [debug]  Connecting via SSH from root@10.0.102.221(10.0.102.221:22) to root@10.0.102.204(10.0.102.204:22)..
Sun Dec 9 11:42:52 2018 - [debug]   ok.
Sun Dec 9 11:42:52 2018 - [debug]  Connecting via SSH from root@10.0.102.221(10.0.102.221:22) to root@10.0.102.179(10.0.102.179:22)..
Sun Dec 9 11:42:52 2018 - [debug]   ok.
Sun Dec 9 11:42:52 2018 - [info] All SSH connection tests passed successfully.
Replication check:
# masterha_check_repl --conf=/etc/masterha/app1.cnf
Sun Dec 9 12:14:28 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sun Dec 9 12:14:28 2018 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Sun Dec 9 12:14:28 2018 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Sun Dec 9 12:14:28 2018 - [info] MHA::MasterMonitor version 0.56.
Sun Dec 9 12:14:28 2018 - [info] GTID failover mode = 1
Sun Dec 9 12:14:28 2018 - [info] Dead Servers:
Sun Dec 9 12:14:28 2018 - [info] Alive Servers:
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.179(10.0.102.179:3306)
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.221(10.0.102.221:3306)
Sun Dec 9 12:14:28 2018 - [info] Alive Slaves:
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.179(10.0.102.179:3306)  Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Sun Dec 9 12:14:28 2018 - [info]     GTID ON
Sun Dec 9 12:14:28 2018 - [info]     Replicating from 10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info]     Primary candidate for the new Master (candidate_master is set)
Sun Dec 9 12:14:28 2018 - [info]   10.0.102.221(10.0.102.221:3306)  Version=5.7.22-log (oldest major version between slaves) log-bin:enabled
Sun Dec 9 12:14:28 2018 - [info]     GTID ON
Sun Dec 9 12:14:28 2018 - [info]     Replicating from 10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info]     Not candidate for the new Master (no_master is set)
Sun Dec 9 12:14:28 2018 - [info] Current Alive Master: 10.0.102.204(10.0.102.204:3306)
Sun Dec 9 12:14:28 2018 - [info] Checking slave configurations..
Sun Dec 9 12:14:28 2018 - [info]  read_only=1 is not set on slave 10.0.102.179(10.0.102.179:3306).
Sun Dec 9 12:14:28 2018 - [info]  read_only=1 is not set on slave 10.0.102.221(10.0.102.221:3306).
Sun Dec 9 12:14:28 2018 - [info] Checking replication filtering settings..
Sun Dec 9 12:14:28 2018 - [info]  binlog_do_db= , binlog_ignore_db=
Sun Dec 9 12:14:28 2018 - [info]  Replication filtering check ok.
Sun Dec 9 12:14:28 2018 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Sun Dec 9 12:14:28 2018 - [info] Checking SSH publickey authentication settings on the current master..
Sun Dec 9 12:14:28 2018 - [info] HealthCheck: SSH to 10.0.102.204 is reachable.
Sun Dec 9 12:14:28 2018 - [info]
10.0.102.204(10.0.102.204:3306) (current master)
 +--10.0.102.179(10.0.102.179:3306)
 +--10.0.102.221(10.0.102.221:3306)
Sun Dec 9 12:14:28 2018 - [info] Checking replication health on 10.0.102.179..
Sun Dec 9 12:14:28 2018 - [info]  ok.
Sun Dec 9 12:14:28 2018 - [info] Checking replication health on 10.0.102.221..
Sun Dec 9 12:14:28 2018 - [info]  ok.
Sun Dec 9 12:14:28 2018 - [info] Checking master_ip_failover_script status:
Sun Dec 9 12:14:28 2018 - [info]   /usr/local/bin/master_ip_failover --ssh_user=root --command=status --ssh_user=root --orig_master_host=10.0.102.204 --orig_master_ip=10.0.102.204 --orig_master_port=3306

IN SCRIPT TEST====service keepalived stop==service keepalived start===

Checking the Status of the script.. OK
Sun Dec 9 12:14:28 2018 - [info]  OK.
Sun Dec 9 12:14:28 2018 - [warning] shutdown_script is not defined.
Sun Dec 9 12:14:28 2018 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
Note the GTID hints in the replication check output.
If both checks pass without errors, start the MHA monitor:
nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover &
Check MHA's running state:
# masterha_check_status --conf=/etc/masterha/app1.cnf
app1 (pid:22124) is running(0:PING_OK), master:10.0.102.204    # app1 is running; the cluster master is 10.0.102.204
At this point the MHA cluster is up: stop the master, and the candidate slave is automatically promoted to master.
Configuring VIP failover
The current cluster master is 204. Picture this scenario: a front-end application is connected to this database when the master goes down for some reason. From MHA's point of view we can promote the candidate master, 179, to keep the cluster running; but the front-end cannot have the database IP in its source code edited every time the cluster fails over. What we need is a VIP for the front-end to connect to, one that always points at a healthy database server.
MHA can provide a VIP in two ways: with keepalived, or with MHA's own scripts.
Using keepalived for VIP failover
Install keepalived on the current master and on the candidate master:

# a plain yum install is enough
yum install -y keepalived

Then edit the configuration file; a yum install puts it under /etc/ by default.
# configuration file on the current master
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id MYSQL_HA
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.102.110
    }
}

# configuration file on the candidate master (identical here; both run in BACKUP state)
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id MYSQL_HA
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.102.110
    }
}
Note: both servers above run keepalived in BACKUP state. keepalived can be deployed in two patterns, master->backup and backup->backup, and they behave very differently. In master->backup mode, when the master fails the virtual IP floats to the slave; but once the original master is repaired and keepalived restarts, it takes the VIP back, and this preemption happens even with nopreempt set. In backup->backup mode, the VIP floats to the slave when the master fails, and when the old master and its keepalived come back they do not preempt the new master's VIP, even if the old master's priority is higher. To minimize the number of VIP moves, the repaired master is usually brought back as the new standby.
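As an aside on the modes above: per the keepalived documentation, nopreempt is only honored when the instance's state is BACKUP, which is one more reason the configs in this post use backup->backup. A sketch of where it would sit (only the two marked lines differ from the configs above):

```
vrrp_instance VI_1 {
    state BACKUP     # nopreempt is ignored unless the state is BACKUP
    nopreempt        # don't take the VIP back after recovery, even with a higher priority
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.102.110
    }
}
```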
Once keepalived is configured, start it; it is worth first verifying on its own that keepalived completes the VIP move.
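Checking where the VIP landed boils down to grepping ip addr. A tiny sketch of that check (has_vip is my own helper name):

```shell
# Reads `ip addr` output on stdin and reports whether the given VIP is bound.
has_vip() {
  if grep -q "inet $1/"; then
    echo "VIP present"
  else
    echo "VIP absent"
    return 1
  fi
}

# On a real host:
#   ip -4 addr show eth0 | has_vip 10.0.102.110
# Stop keepalived on the master (service keepalived stop): the VIP should
# vanish there and appear on the candidate master.
```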
With keepalived working, the failover script can be wired in. Since we want to test VIP failover here, the failover script's location must be specified in the MHA configuration file:
# cat masterha_default.cnf
[server default]
user=root
password=123456
ssh_user=root
ssh_port=22
ping_interval=3
repl_user=repl
repl_password=123456
master_binlog_dir= /data/mysql/
remote_workdir=/data/log/masterha
secondary_check_script= masterha_secondary_check -s test1 -s mgt01 --user=root --port=22 --master_host=test2 --master_port=3306
# failover script location
master_ip_failover_script= /usr/local/bin/master_ip_failover --ssh_user=root
# shutdown_script= /script/masterha/power_manager
#report_script= /usr/local/bin/send_report

# online-change script location
master_ip_online_change_script= /script/masterha/master_ip_online_change --ssh_user=root
The failover script you have to write (or adapt) yourself. I took the failover script from the post at http://www.ywnds.com/?p=8116, but during testing it had a problem: MHA could not get the failover status. After a small change (dropping two variable references) it works; here it is:
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;

my (
    $command,         $ssh_user,      $orig_master_host,
    $orig_master_ip,  $orig_master_port,
    $new_master_host, $new_master_ip, $new_master_port
);

my $ssh_start_vip = "service keepalived start";
#my $ssh_start_vip = "systemctl start keepalived.service";
#my $ssh_stop_vip = "systemctl stop keepalived.service";
my $ssh_stop_vip = "service keepalived stop";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        #`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

# A simple system call that disables the VIP on the old master
sub stop_vip() {
    return 0 unless ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
With the script in place, check replication again:
masterha_check_repl --conf=/etc/masterha/app1.cnf
If the replication check reports a failover-script error, run the command below by itself to see the script's status and error messages:
/usr/local/bin/master_ip_failover --ssh_user=root --command=status --ssh_user=root --orig_master_host=10.0.102.179 --orig_master_ip=10.0.102.179 --orig_master_port=3306
Start the MHA monitor; if it comes up without errors, the configuration is complete.
The current state is:
204 is the master and holds the VIP.
179 is the candidate master.
221 is a plain slave.
Stop the database on 204 and see whether the VIP moves to 179 and whether the cluster master fails over to 179 as well.
Stopping the database on 204:
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:1d:ae:12:52:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.102.204/22 brd 10.0.103.255 scope global eth0
    inet 10.0.102.110/32 scope global eth0
    inet6 fe80::f81d:aeff:fe12:5200/64 scope link
       valid_lft forever preferred_lft forever
# service mysqld stop    # stop the database
Shutting down MySQL............ SUCCESS!
# ip addr    # the VIP is no longer on 204
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether fa:1d:ae:12:52:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.102.204/22 brd 10.0.103.255 scope global eth0
inet6 fe80::f81d:aeff:fe12:5200/64 scope link
valid_lft forever preferred_lft forever
Check the VIP on 179:
# the VIP has moved, and 179 has become the master
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:bc:66:8d:2e:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.102.179/22 brd 10.0.103.255 scope global eth0
    inet 10.0.102.110/32 scope global eth0
    inet6 fe80::f8bc:66ff:fe8d:2e00/64 scope link
       valid_lft forever preferred_lft forever
# mysql -e "show processlist;"
+----+------+------------+------+------------------+------+---------------------------------------------------------------+------------------+
| Id | User | Host       | db   | Command          | Time | State                                                         | Info             |
+----+------+------------+------+------------------+------+---------------------------------------------------------------+------------------+
| 15 | repl | mgt01:2735 | NULL | Binlog Dump GTID |  135 | Master has sent all binlog to slave; waiting for more updates | NULL             |
| 16 | root | localhost  | NULL | Query            |    0 | starting                                                      | show processlist |
+----+------+------------+------+------------------+------+---------------------------------------------------------------+------------------+
You can also check the MHA log:
cat /data/log/app1/manager.log
That covers using keepalived for high availability and testing the VIP failover; next, let's test the same thing with MHA's bundled script.