
MySQL MHA + keepalived Demonstration (Part 1)


A walkthrough of building a complete MHA cluster environment.

1. Test Environment

Installing MHA: an overview
The MHA Node package contains three main scripts, all of which depend on Perl modules:
save_binary_logs: saves and copies the binary logs of a crashed master
apply_diff_relay_logs: identifies differential relay-log events and applies them to the other slaves
purge_relay_logs: purges relay-log files
MHA Node must be installed on every MySQL server, and on the MHA manager host as well, because the manager package depends internally on the node package. The MHA manager connects to the managed MySQL servers over SSH and runs the node scripts there. MHA Node depends on Perl's DBD::mysql module.
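Since the node package depends on Perl's DBD::mysql, a quick pre-flight check on each server can save a failed install. A minimal sketch (the yum package name is the CentOS 6 one used later in this article):

```shell
# Check whether Perl can load DBD::mysql before installing the MHA node package.
if perl -MDBD::mysql -e 1 2>/dev/null; then
    msg="DBD::mysql OK"
else
    msg="DBD::mysql missing"   # fix with: yum install -y perl-DBD-MySQL
fi
echo "$msg"
```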

1.1 Environment overview

1.1.1 VMware virtual machines; minimal install of CentOS 6.5 x86_64; MySQL 5.7.21
1.1.2 SSH on every VM listens on the default port 22
1.1.3 iptables is disabled on all VMs
1.1.4 SELinux is disabled on all VMs
1.1.5 Clocks on all servers are synchronized: ntpdate 0.asia.pool.ntp.org

1.2 This demonstration uses three machines, deployed as follows:

Role                   IP (internal)     Hostname    Services deployed                Purpose
Master                 192.168.2.128     server02    mha4mysql-node-0.56-0.el6        writes
                                                     keepalived
Slave (backup master)  192.168.2.129     server03    mha4mysql-node-0.56-0.el6        reads
                                                     keepalived
Slave + Monitor        192.168.2.130     server04    mha4mysql-node-0.56-0.el6        reads + data backups
                                                     mha4mysql-manager-0.56-0.el6

1.3 Notes:

server03 and server04 are slaves of server02 (setting up the replication itself is demonstrated briefly below). The master serves writes; the backup-master candidate (in reality a slave, hostname server03) serves reads, as does the other slave. If the master goes down, the backup-master candidate is promoted to the new master and the remaining slave is repointed at it.
server04 additionally runs the Monitor (MHA Manager), which watches whether the master of the replication cluster is healthy; once the master dies, MHA Manager performs the master/slave switchover automatically.

2. MHA Deployment Process

2.1 Setting up master-slave replication

2.1.1 Master configuration file (192.168.2.128)


[root@server02 ~]# cat /etc/my.cnf

[client]
port            = 3306
socket          = /tmp/mysql.sock
[mysql]
no-auto-rehash
[mysqld]
user    = mysql
port    = 3306
socket  = /tmp/mysql.sock
basedir = /usr/local/mysql
datadir = /data/mysql/data
back_log = 2000
open_files_limit = 1024
max_connections = 800
max_connect_errors = 3000
max_allowed_packet = 33554432
external-locking = FALSE
character_set_server = utf8
#binlog
log-slave-updates = 1
binlog_format = row
log-bin = /data/mysql/logs/bin-log/mysql-bin
expire_logs_days = 5
sync_binlog = 1
binlog_cache_size = 1M
max_binlog_cache_size = 1M
max_binlog_size = 2M
#replicate-ignore-db=mysql
skip-name-resolve
slave-skip-errors = 1032,1062
##skip_slave_start=1
skip_slave_start=0
##read_only=1
##relay_log_purge=0
###relay log
relay-log = /data/mysql/logs/relay-log/relay-bin
relay-log-info-file = /data/mysql/relay-log.info
###slow_log
slow_query_log = 1
slow-query-log-file = /data/mysql/logs/mysql-slow.log
log-error = /data/mysql/logs/error.log
##GTID
server_id = 1103
##gtid_mode=on
##enforce_gtid_consistency=on
event_scheduler = ON
innodb_autoinc_lock_mode = 1
innodb_buffer_pool_size = 10737418
innodb_data_file_path = ibdata1:10M:autoextend
innodb_data_home_dir = /data/mysql/data
innodb_log_group_home_dir = /data/mysql/data
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_io_capacity = 2000
innodb_log_buffer_size = 8388608
innodb_log_files_in_group = 3
innodb_max_dirty_pages_pct = 50
innodb_open_files = 512
innodb_read_io_threads = 8
innodb_thread_concurrency = 20
innodb_write_io_threads = 8
innodb_lock_wait_timeout = 10
innodb_buffer_pool_load_at_startup = 1
innodb_buffer_pool_dump_at_shutdown = 1
key_buffer_size = 3221225472
innodb_log_file_size = 1G
local_infile = 1
log_bin_trust_function_creators = 1
log_output = FILE
long_query_time = 1
myisam_sort_buffer_size = 33554432
join_buffer_size = 8388608
tmp_table_size = 33554432
net_buffer_length = 8192
performance_schema = 1
performance_schema_max_table_instances = 200
query_cache_size = 0
query_cache_type = 0
read_buffer_size = 20971520
read_rnd_buffer_size = 16M
max_heap_table_size = 33554432
bulk_insert_buffer_size = 134217728
secure-file-priv = /data/mysql/tmp
sort_buffer_size = 2097152
table_open_cache = 128
thread_cache_size = 50
tmpdir = /data/mysql/tmp
slave-load-tmpdir = /data/mysql/tmp
wait_timeout = 120
transaction_isolation = read-committed
lower_case_table_names=1
[mysqldump]
quick
max_allowed_packet = 64M

[mysqld_safe]
log-error = /data/mysql/logs/error.log
pid-file = /data/mysql/mysqld.pid

2.1.2 Slave configuration files

192.168.2.129: when the master fails, this slave is promoted to the new master.
Its my.cnf differs from the master's (192.168.2.128) in that the following two parameters are enabled:
read_only = 1
relay_log_purge = 0
and its server-id must differ from the master's.

192.168.2.130: provides read-only MySQL and data backups, and runs the Monitor.

Its my.cnf differs from the master's in the same two parameters:
read_only = 1
relay_log_purge = 0
and its server-id must differ from both the master (128) and the other slave (129).
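The slave-side differences can be collected into a short my.cnf fragment. A sketch (the server_id values are illustrative; any values distinct across the three servers work):

```ini
# my.cnf overrides on the slaves (server03 / server04); everything else
# matches the master configuration shown in 2.1.1.
[mysqld]
server_id       = 1104   # e.g. 1105 on server04; must be unique per server
read_only       = 1
relay_log_purge = 0
```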

2.1.3 Parameter details
1) Every slave should set relay_log_purge=0; otherwise MHA's checks will complain. The parameter can also be changed at runtime with mysql -e 'set global relay_log_purge=0', since any slave may be promoted to master at any moment.

2) Every slave should set read_only=1; otherwise MHA's checks will likewise complain. It can be changed at runtime with mysql -e 'set global read_only=1', again because any slave may become the master.
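Because relay_log_purge=0 stops MySQL from deleting relay logs on its own, MHA's purge_relay_logs tool is usually scheduled from cron on each slave instead. A sketch of a crontab entry, assuming the monitor account and the tmp directory from the my.cnf above (the schedule is hypothetical; stagger the run time across slaves so they do not purge simultaneously):

```shell
# /etc/crontab fragment: purge relay logs at 04:00 while keeping
# automatic purging disabled (--disable_relay_log_purge).
0 4 * * * root /usr/bin/purge_relay_logs --user=monitor --password=123456 --host=127.0.0.1 --port=3306 --workdir=/data/mysql/tmp --disable_relay_log_purge >> /data/mysql/logs/purge_relay_logs.log 2>&1
```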

2.1.4 Replication setup
1) Create the replication account on the master
On server02 (the master):
mysqldump -uroot -p123456 --master-data=2 --single-transaction -R --triggers -A > /root/all.sql
Here --master-data=2 records the master's binlog file name and position at backup time (as a comment in the dump), --single-transaction takes a consistent snapshot, -R includes stored procedures and functions, --triggers includes triggers, and -A dumps all databases.

grant replication slave on *.* to 'repmha'@'192.168.2.%' identified by '123456';
flush privileges;

scp -rp /root/all.sql [email protected]:/root
scp -rp /root/all.sql [email protected]:/root

On the slaves:
Run the following on both slaves, server03 and server04:

mysql -uroot -p123456 < /root/all.sql
CHANGE MASTER TO
MASTER_HOST='192.168.2.128',
MASTER_PORT=3306,
MASTER_USER='repmha',
MASTER_PASSWORD='123456';
start slave;
show slave status\G

(If binlog coordinates are required, take MASTER_LOG_FILE and MASTER_LOG_POS for the CHANGE MASTER statement from the commented CHANGE MASTER line near the top of all.sql; that is the position --master-data=2 recorded.)

Replication is now fully configured.
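A quick way to confirm the setup is to check the two thread-status fields of `show slave status\G` on each slave. A minimal sketch of that check; a canned sample status is parsed here so the logic is self-contained, but in practice the first line would be `status=$(mysql -uroot -p123456 -e 'show slave status\G')`:

```shell
# Verify that both replication threads report "Yes" in `show slave status\G`.
# A canned sample stands in for the live output of the mysql client.
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'
io=$(printf '%s\n' "$status" | awk '/Slave_IO_Running:/ {print $2}')
sql=$(printf '%s\n' "$status" | awk '/Slave_SQL_Running:/ {print $2}')
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    verdict="replication OK"
else
    verdict="replication broken: IO=$io SQL=$sql"
fi
echo "$verdict"   # prints: replication OK
```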

2.2 Creating the MySQL management user monitor

2.2.1 Remove unneeded accounts

mysql> drop user root@'localhost';
mysql> select user,host from mysql.user;
+--------+-------------+
| user   | host        |
+--------+-------------+
| root   | 127.0.0.1   |
| repmha | 192.168.2.% |
+--------+-------------+
2 rows in set (0.00 sec)

2.2.2 Create the monitor management account

mysql> grant all on *.* to monitor@'192.168.2.%' identified by '123456';
Query OK, 0 rows affected (0.01 sec)

An earlier MHA test failed at exactly this point because of account privileges: the manager does not administer the databases through localhost, it connects over the internal subnet to check replication, so the monitor account must be granted on that subnet.

2.3 Set read_only on both slaves

(The slaves serve reads; the setting is applied at runtime rather than written into the configuration file because either slave may be promoted to master at any time.)

192.168.2.129 [root ~]$ mysql -uroot -p123456 -e "set global read_only=1"
192.168.2.130 [root ~]$ mysql -uroot -p123456 -e "set global read_only=1"

2.4 Configure passwordless SSH login

(Key-based login is standard practice. It is best not to disable password login outright; doing so can cause problems.)

On server02, 192.168.2.128 (Master):
192.168.2.128 [root ~]$ ssh-keygen -t rsa
192.168.2.128 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
192.168.2.128 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

On server03, 192.168.2.129 (slave):
192.168.2.129 [root ~]$ ssh-keygen -t rsa
192.168.2.129 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
192.168.2.129 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

On server04, 192.168.2.130 (slave + Monitor):
192.168.2.130 [root ~]$ ssh-keygen -t rsa
192.168.2.130 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
192.168.2.130 [root ~]$ ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]

2.5 Installing MHA

2.5.1 Create a directory for the software packages

mkdir /data/tools -p
cd /data/tools/
rz mha4mysql-node-0.56-0.el6.noarch.rpm
rz mha4mysql-manager-0.56-0.el6.noarch.rpm

Install MHA Node on all nodes (server02 is shown below; repeat the same steps on server03 and server04). Both MHA Node and MHA Manager are downloaded from the project site:
Download: https://code.google.com/p/mysql-master-ha/wiki/Downloads?tm=2

2.5.2 Install MHA Node
Install it on all servers:

yum install -y perl-DBD-MySQL
rpm -ihv mha4mysql-node-0.56-0.el6.noarch.rpm

2.5.3 Install MHA Manager
On 192.168.2.130, install both mha4mysql-manager and mha4mysql-node:

yum -y install perl-Parallel-ForkManager perl-Log-Dispatch perl-Time-HiRes perl-Mail-Sender perl-Mail-Sendmail perl-DBD-MySQL perl-Config-Tiny perl-Config-IniFiles
rpm -ihv mha4mysql-node-0.56-0.el6.noarch.rpm
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

For demonstration purposes, the installation of mha4mysql-node and mha4mysql-manager is shown on a single node below.
A. Installing from the rpm packages

[root@server03 ~]# rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm
After a successful install, the package provides the following files:
[root@server03 ~]# rpm -ql mha4mysql-node-0.56-0.el6.noarch
/usr/bin/apply_diff_relay_logs
/usr/bin/filter_mysqlbinlog
/usr/bin/purge_relay_logs
/usr/bin/save_binary_logs
/usr/share/man/man1/apply_diff_relay_logs.1.gz
/usr/share/man/man1/filter_mysqlbinlog.1.gz
/usr/share/man/man1/purge_relay_logs.1.gz
/usr/share/man/man1/save_binary_logs.1.gz
/usr/share/perl5/vendor_perl/MHA/BinlogHeaderParser.pm
/usr/share/perl5/vendor_perl/MHA/BinlogManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFindManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinder.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderElp.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderXid.pm
/usr/share/perl5/vendor_perl/MHA/NodeConst.pm
/usr/share/perl5/vendor_perl/MHA/NodeUtil.pm
/usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm

Node script notes (these tools are normally triggered by the MHA Manager scripts; no manual intervention is needed):
save_binary_logs      // saves and copies the master's binary logs
apply_diff_relay_logs // identifies differential relay-log events and applies them to the other slaves
filter_mysqlbinlog    // strips unnecessary ROLLBACK events (MHA no longer uses this tool)
purge_relay_logs      // purges relay logs (without blocking the SQL thread)
These four commands were installed by mha4mysql-node-0.56-0.el6.noarch.rpm.

[root@server03 ~]#  rpm -ql mha4mysql-manager-0.56-0.el6.noarch
package mha4mysql-manager-0.56-0.el6.noarch is not installed
[root@server03 ~]# rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm 
Preparing...                ########################################### [100%]
   1:mha4mysql-manager      ########################################### [100%]
[root@server03 ~]# 

After a successful install, the manager package provides the following files:

[root@server03 ~]# rpm -ql mha4mysql-manager-0.56-0.el6.noarch
/usr/bin/masterha_check_repl
/usr/bin/masterha_check_ssh
/usr/bin/masterha_check_status
/usr/bin/masterha_conf_host
/usr/bin/masterha_manager
/usr/bin/masterha_master_monitor
/usr/bin/masterha_master_switch
/usr/bin/masterha_secondary_check
/usr/bin/masterha_stop
/usr/share/man/man1/masterha_check_repl.1.gz
/usr/share/man/man1/masterha_check_ssh.1.gz
/usr/share/man/man1/masterha_check_status.1.gz
/usr/share/man/man1/masterha_conf_host.1.gz
/usr/share/man/man1/masterha_manager.1.gz
/usr/share/man/man1/masterha_master_monitor.1.gz
/usr/share/man/man1/masterha_master_switch.1.gz
/usr/share/man/man1/masterha_secondary_check.1.gz
/usr/share/man/man1/masterha_stop.1.gz
/usr/share/perl5/vendor_perl/MHA/Config.pm
/usr/share/perl5/vendor_perl/MHA/DBHelper.pm
/usr/share/perl5/vendor_perl/MHA/FileStatus.pm
/usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdmin.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdminWrapper.pm
/usr/share/perl5/vendor_perl/MHA/ManagerConst.pm
/usr/share/perl5/vendor_perl/MHA/ManagerUtil.pm
/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm
/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm
/usr/share/perl5/vendor_perl/MHA/MasterRotate.pm
/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm
/usr/share/perl5/vendor_perl/MHA/Server.pm
/usr/share/perl5/vendor_perl/MHA/ServerManager.pm

Copy the following scripts to /usr/local/bin:

/usr/bin/masterha_check_repl
/usr/bin/masterha_check_ssh
/usr/bin/masterha_check_status
/usr/bin/masterha_conf_host
/usr/bin/masterha_manager
/usr/bin/masterha_master_monitor
/usr/bin/masterha_master_switch
/usr/bin/masterha_secondary_check
/usr/bin/masterha_stop

Also copy the related sample scripts to /usr/local/bin (they are included in the unpacked source). This step is optional: the scripts are incomplete templates that the software authors left for us to adapt. Be warned, though: if you enable the manager parameter corresponding to one of these scripts without having adapted the script itself, MHA will throw errors. I was burned badly by this.

master_ip_failover       // manages the VIP during automatic failover. Optional: with keepalived you can write your own script, e.g. monitor MySQL and stop keepalived when MySQL fails, so the VIP floats away automatically
master_ip_online_change  // manages the VIP during an online (manual) switchover. Optional; a simple shell script will do
power_manager            // powers off the failed host after a failover. Optional
send_report              // sends an alert after a failover. Optional; a simple shell script will do
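The keepalived approach described for master_ip_failover (stop keepalived when MySQL dies so the VIP floats to the backup master) can be sketched as a vrrp_script health check in keepalived.conf. This is a hypothetical fragment: the VIP, interface name, priorities, and the chk_mysql.sh path are illustrative, not values from this article:

```conf
! keepalived.conf fragment on the master (server02); mirror it on server03
! with a lower priority. All concrete values here are illustrative.
vrrp_script chk_mysql {
    script "/usr/local/bin/chk_mysql.sh"   # e.g. exit non-zero when mysqld is gone
    interval 2
    weight -30
}
vrrp_instance VI_1 {
    state BACKUP            # BACKUP + nopreempt on both nodes avoids VIP flapping
    nopreempt
    interface eth0
    virtual_router_id 51
    priority 100            # e.g. 90 on server03
    virtual_ipaddress {
        192.168.2.100/24    # hypothetical VIP that clients connect to
    }
    track_script {
        chk_mysql
    }
}
```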

At this point the entire MHA cluster environment has been set up.
