
Docker MySQL Master-Slave Replication + Read/Write Splitting

Overall plan

host          ip              port               notes
------------  --------------  -----------------  ---------------------
(VIP)         192.168.100.30  10088              Keepalived VIP
Proxysql-21   192.168.100.21  6032,6033          ProxySQL proxy
Proxysql-22   192.168.100.22  6032,6033          ProxySQL proxy
mysql-master  192.168.100.11  3306,33060 (MGR)   MySQL primary node
mysql-slave1  192.168.100.12  3306,33060 (MGR)   MySQL secondary node
mysql-slave2  192.168.100.13  3306,33060 (MGR)   MySQL secondary node

Basic workflow
Configure MGR
Step 1: Write configuration files for the 3 MySQL containers
Step 2: Create the 3 MySQL containers
Step 3: Configure the master node
Step 4: Configure the slave nodes
Configure the ProxySQL cluster
Step 5: Configure node accounts
Step 6: Install ProxySQL
Step 7: Configure the second ProxySQL
Configure keepalived
Step 8: Set up passwordless SSH between the two ProxySQL hosts
Step 9: Configure keepalived on 100.21
Step 10: Configure keepalived on 100.22
Configure port forwarding on the host
Step 11: Configure port forwarding on the host

Configure MGR
Step 1: Write configuration files for the 3 MySQL containers
#mkdir -p /opt/mysql/{master,slave1,slave2}/{conf,logs,data}
Master's my.cnf:
#vi /opt/mysql/master/conf/my.cnf

[client]    
default-character-set=utf8  
[mysql]    
default-character-set=utf8
[mysqld]
skip-name-resolve
character-set-server=utf8
collation-server=utf8_general_ci
log_bin=binlog
binlog_format=row
relay-log=mysql-relay-bin
innodb_buffer_pool_size=2048M
innodb_log_file_size=128M
query_cache_size=64M
max_connections=128
max_allowed_packet = 50M
log_timestamps=SYSTEM
symbolic-links=0

server-id=1
pid-file=/var/run/mysqld/mysqld.pid

# for GTID
gtid_mode=on
enforce_gtid_consistency=on
log_slave_updates=on

# for group replication
master_info_repository=table
relay_log_info_repository=table
binlog_checksum=none
transaction_write_set_extraction=XXHASH64  # hashes the keys of the rows a transaction touches; must be enabled for MGR
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"  # uniquely identifies the group
loose-group_replication_allow_local_disjoint_gtids_join=on
# start group replication automatically on boot; this can be off for the first initialization,
# otherwise you must run stop group_replication before initializing
loose-group_replication_start_on_boot=on
loose-group_replication_unreachable_majority_timeout=10  # timeout for a member stuck in an unreachable minority
loose-group_replication_local_address="192.168.100.11:33060"  # address used for group communication
# seed (donor) addresses
loose-group_replication_group_seeds="192.168.100.11:33060,192.168.100.12:33060,192.168.100.13:33060"
loose-group_replication_bootstrap_group=off  # bootstrap-node setting

[mysqld_safe]
default-character-set=utf8

Slave1's my.cnf:
#vi /opt/mysql/slave1/conf/my.cnf; only the following two lines differ from the Master, everything else is identical:
server-id=2
loose-group_replication_local_address="192.168.100.12:33060"

Slave2's my.cnf:
#vi /opt/mysql/slave2/conf/my.cnf; only the following two lines differ from the Master, everything else is identical:
server-id=3
loose-group_replication_local_address="192.168.100.13:33060"
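
Since only two lines change, the slave files can be generated from the master file; a small sketch (the sed patterns assume the exact values shown above):

#cp /opt/mysql/master/conf/my.cnf /opt/mysql/slave1/conf/my.cnf
#sed -i -e 's/server-id=1/server-id=2/' -e 's/192.168.100.11:33060"/192.168.100.12:33060"/' /opt/mysql/slave1/conf/my.cnf
#cp /opt/mysql/master/conf/my.cnf /opt/mysql/slave2/conf/my.cnf
#sed -i -e 's/server-id=1/server-id=3/' -e 's/192.168.100.11:33060"/192.168.100.13:33060"/' /opt/mysql/slave2/conf/my.cnf

The closing quote in the address pattern keeps sed from touching the group_seeds line, which also contains 192.168.100.11:33060.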

Step 2: Create the 3 MySQL containers
Before creating the containers with docker, create a network:
docker network create --subnet=192.168.100.0/24 mysqlnet
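To double-check the subnet before creating the containers, grep the inspect output:

#docker network inspect mysqlnet | grep Subnet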
Then create the 3 MySQL containers with docker:

docker run -d --restart=always -p 10001:3306 -p 10011:33060 --name master --hostname=mysql-master --net=mysqlnet --ip=192.168.100.11 --add-host mysql-master:192.168.100.11 --add-host mysql-slave1:192.168.100.12 --add-host mysql-slave2:192.168.100.13 -v /opt/mysql/master/conf/my.cnf:/etc/mysql/conf.d/my.cnf -v /opt/mysql/master/logs:/var/log/mysql -v /opt/mysql/master/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.23
docker run -d --restart=always -p 10002:3306 -p 10012:33060 --name slave1 --hostname=mysql-slave1 --net=mysqlnet --ip=192.168.100.12 --add-host mysql-master:192.168.100.11 --add-host mysql-slave1:192.168.100.12 --add-host mysql-slave2:192.168.100.13 -v /opt/mysql/slave1/conf/my.cnf:/etc/mysql/conf.d/my.cnf -v /opt/mysql/slave1/logs:/var/log/mysql -v /opt/mysql/slave1/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.23
docker run -d --restart=always -p 10003:3306 -p 10013:33060 --name slave2 --hostname=mysql-slave2 --net=mysqlnet --ip=192.168.100.13 --add-host mysql-master:192.168.100.11 --add-host mysql-slave1:192.168.100.12 --add-host mysql-slave2:192.168.100.13 -v /opt/mysql/slave2/conf/my.cnf:/etc/mysql/conf.d/my.cnf -v /opt/mysql/slave2/logs:/var/log/mysql -v /opt/mysql/slave2/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.23
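
A quick check that all three containers stayed up (a container stuck in a restart loop usually points at a my.cnf mistake):

#docker ps --format '{{.Names}}\t{{.Status}}'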

Step 3: Configure the master node
#docker exec -it master /bin/bash
First, create the user used for replication:
root@mysql-master:/# mysql -uroot -p123456
mysql> set sql_log_bin=0;
mysql> grant replication slave,replication client on *.* to 'mgruser'@'%' identified by '123456';
mysql> grant replication slave,replication client on *.* to 'mgruser'@'127.0.0.1' identified by '123456';
mysql> grant replication slave,replication client on *.* to 'mgruser'@'localhost' identified by '123456';
mysql> set sql_log_bin=1;
mysql> flush privileges;
Second, point replication at that user:
mysql> CHANGE MASTER TO MASTER_USER='mgruser', MASTER_PASSWORD='123456' FOR CHANNEL 'group_replication_recovery';
Third, install the group replication plugin:
mysql> install plugin group_replication soname 'group_replication.so';
Fourth, bootstrap a replication group:
mysql> set global group_replication_bootstrap_group=on;
mysql> start group_replication;
mysql> set global group_replication_bootstrap_group=off;
After a node starts for the first time, the group_replication plugin must be installed and replication configured; every node performs the first three steps. When starting group_replication on the first node of the cluster, the bootstrap parameter must be set; the remaining nodes simply run START GROUP_REPLICATION;.
Once the whole cluster is configured, a node that goes down can be restarted with the normal mysql startup command. No manual reconfiguration is needed; after restarting, the node rejoins the replication group automatically.

Step 4: Configure the slave nodes
The first three steps are the same as for the master above; for the last step, just run start group_replication;
Once that succeeds, check the current group members:
mysql> SELECT * FROM performance_schema.replication_group_members;
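With all three nodes joined, the result should look roughly like this (the MEMBER_ID column, a per-node UUID, is omitted here):

CHANNEL_NAME               MEMBER_HOST   MEMBER_PORT  MEMBER_STATE
group_replication_applier  mysql-master  3306         ONLINE
group_replication_applier  mysql-slave1  3306         ONLINE
group_replication_applier  mysql-slave2  3306         ONLINE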

At this point the one-master, two-slave synchronization is complete. Test whether a slave takes over when the master is shut down.
On a slave, check which node is currently the primary:
mysql> show global status like 'group_replication_primary_member';

Shut down the primary:
#docker stop master
Check from a slave again which node is the primary:
mysql> show global status like 'group_replication_primary_member';

The primary is now slave1.
Bring the old primary back into the cluster:
#docker start master
The group again shows three members, and the primary is still slave1.
If the old primary did not rejoin the cluster, try running start group_replication; on it.

If the nodes fail to rejoin the replication group after the Docker service or the host is restarted, first run this on the primary:
mysql> set global group_replication_bootstrap_group=on;
mysql> start group_replication;
mysql> set global group_replication_bootstrap_group=off;
Then run this on the slaves:
mysql> start group_replication;

Configure the ProxySQL cluster
In earlier versions of ProxySQL, supporting MySQL Group Replication (MGR) required third-party scripts to health-check the group and adjust the configuration automatically. Since ProxySQL v1.4.0, MGR is supported natively, and the main database provides the mysql_group_replication_hostgroups table to control the read and write hostgroups of an MGR cluster.
Although MGR is now supported natively, an extra view, sys.gr_member_routing_candidate_status, still has to be created on the MGR nodes to supply ProxySQL with monitoring metrics. The script that creates this view can be found on github.com.

Step 5: Configure node accounts
On the backend database, confirm which node is the writer with show global status like 'group_replication_primary_member'; and then create the application account and the monitoring account on that node (the original writer, master, was stopped earlier, so the writer is now slave1):
First, on the writer node, create the function and view that report MGR node status:
#cd /opt/mysql/slave1/data
#vi addition_to_sys.sql, pasting in the SQL from https://github.com/lefred/mysql_gr_routing_check/blob/master/addition_to_sys.sql
#docker exec -it slave1 /bin/bash
root@mysql-slave1:/# mysql -uroot -p < /var/lib/mysql/addition_to_sys.sql
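
To confirm the view exists, query it directly; on a healthy writer it is expected to report something like the following (column names per lefred's script):

mysql> select * from sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES              | NO        |                   0 |                    0 |
+------------------+-----------+---------------------+----------------------+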

Second, create the accounts on the writer node:
root@mysql-slave1:/# mysql -uroot -p
mysql> grant select on *.* to 'monitor'@'%' identified by 'monitor';
mysql> grant all privileges on *.* to 'proxysql'@'%' identified by 'proxysql';
mysql> flush privileges;

Step 6: Install ProxySQL
No official ProxySQL image was found, so a centos image is used and ProxySQL is installed inside it.
First, run the centos image:
#mkdir -p /opt/{proxysql-1,proxysql-2}/home, used to hold the keepalived check script

docker run -d --restart=always --privileged --name proxysql-21 --hostname=proxysql-21 --net=mysqlnet --ip=192.168.100.21 --add-host mysql-proxy-22:192.168.100.22 -v /opt/proxysql-1/home:/home centos /usr/sbin/init
docker run -d --restart=always --privileged --name proxysql-22 --hostname=proxysql-22 --net=mysqlnet --ip=192.168.100.22 --add-host mysql-proxy-21:192.168.100.21 -v /opt/proxysql-2/home:/home centos /usr/sbin/init

No ports are mapped here, because what gets exposed is port 6033 on the VIP 192.168.100.30, which cannot be bound inside the ProxySQL containers; it has to be added to iptables separately (see Step 11).

Second, install the mysql client:
#docker exec -it proxysql-21 /bin/bash
[root@proxysql-21 /]# yum install -y https://repo.mysql.com/mysql57-community-release-el7.rpm
[root@proxysql-21 /]# yum install mysql -y

Third, install ProxySQL:
Install it following the documentation at http://repo.proxysql.com/
[root@proxysql-21 /]# vi /etc/yum.repos.d/proxysql.repo
[proxysql_repo]
name=ProxySQL YUM repository
baseurl=http://repo.proxysql.com/ProxySQL/proxysql-1.4.x/centos/$releasever
gpgcheck=1
gpgkey=http://repo.proxysql.com/ProxySQL/repo_pub_key
Install:
[root@proxysql-21 /]# yum install -y proxysql
Don't start proxysql yet; writing the cluster-related configuration into /etc/proxysql.cnf first is more convenient than issuing inserts through the admin database.
[root@proxysql-21 /]# vi /etc/proxysql.cnf

datadir="/var/lib/proxysql"

admin_variables=
{
        admin_credentials="admin:admin;cluster:cluster"
        mysql_ifaces="0.0.0.0:6032"
        cluster_username="cluster"
        cluster_password="cluster"
        cluster_check_interval_ms=200
        cluster_check_status_frequency=100
        cluster_mysql_query_rules_save_to_disk=true
        cluster_mysql_servers_save_to_disk=true
        cluster_mysql_users_save_to_disk=true
        cluster_proxysql_servers_save_to_disk=true
        cluster_mysql_query_rules_diffs_before_sync=3
        cluster_mysql_servers_diffs_before_sync=3
        cluster_mysql_users_diffs_before_sync=3
        cluster_proxysql_servers_diffs_before_sync=3
}

mysql_variables=
{
        threads=4
        max_connections=2048
        default_query_delay=0
        default_query_timeout=36000000
        have_compress=true
        poll_timeout=2000
        interfaces="0.0.0.0:6033"
        default_schema="information_schema"
        stacksize=1048576
        server_version="5.5.30"
        connect_timeout_server=3000
        monitor_username="monitor"
        monitor_password="monitor"
        monitor_history=600000
        monitor_connect_interval=60000
        monitor_ping_interval=10000
        monitor_read_only_interval=1500
        monitor_read_only_timeout=500
        ping_interval_server_msec=120000
        ping_timeout_server=500
        commands_stats=true
        sessions_sort=true
        connect_retries_on_failure=10
}

proxysql_servers =
(
        {
                hostname="192.168.100.21"
                port=6032
                comment="primary"
        },
        {
                hostname="192.168.100.22"
                port=6032
                comment="secondary"
        }
)

mysql_servers =
(
        {
                address = "192.168.100.11"      # no default, required . 
                port = 3306                     # no default, required . 
                hostgroup = 1                   # no default, required
                status = "ONLINE"               # default: ONLINE
                weight = 1                      # default: 1
                compression = 0                 # default: 0
                max_replication_lag = 10        # default 0 .
        },
        {
                address = "192.168.100.12"      # no default, required .
                port = 3306                     # no default, required .
                hostgroup = 3                   # no default, required
                status = "ONLINE"               # default: ONLINE
                weight = 1                      # default: 1
                compression = 0                 # default: 0
                max_replication_lag = 10        # default 0 .
        },
        {
                address = "192.168.100.13"      # no default, required .
                port = 3306                     # no default, required . 
                hostgroup = 3                   # no default, required
                status = "ONLINE"               # default: ONLINE
                weight = 1                      # default: 1
                compression = 0                 # default: 0
                max_replication_lag = 10        # default 0 . 
        }
)
mysql_users:
(
        {
                username = "proxysql" # no default , required
                password = "proxysql" # default: ''
                default_hostgroup = 1 # default: 0
                active = 1            # default: 1
        }
)



#defines MySQL Query Rules
mysql_query_rules:
(
        {
                rule_id=1
                active=1
                match_pattern="^SELECT .* FOR UPDATE$"
                destination_hostgroup=1
                apply=1
        },
        {
                rule_id=2
                active=1
                match_pattern="^SELECT"
                destination_hostgroup=3
                apply=1
        }
)

scheduler=()
mysql_replication_hostgroups=()

Start proxysql:
[root@proxysql-21 /]# systemctl start proxysql
Add the MySQL group replication information (adding it in /etc/proxysql.cnf produces a syntax error):
#mysql -uadmin -padmin -h127.0.0.1 -P6032

insert into mysql_group_replication_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup, offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) values (1,2,3,4,1,1,0,100);
load mysql servers to runtime;
save mysql servers to disk;

At this point ProxySQL-21 is configured. From some client, connect with mysql -uproxysql -pproxysql -h192.168.100.21 -P6033, update a record, and check whether reads and writes are split.
Then check how the backend databases are grouped at runtime.
Shut down the writer-node container and check the runtime grouping again: the old writer is moved to the offline hostgroup (hostgroup_id 4), and the reader 192.168.100.12 is promoted to writer. Connect once more with mysql -uproxysql -pproxysql -h192.168.100.21 -P6033, update a record, and verify that read/write splitting is still correct.
Start the old writer container and check the runtime grouping one more time: the old writer 192.168.100.11 has rejoined the cluster, but in the reader group; it does not preempt the writer role.
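
The runtime grouping above can be checked from the admin interface (port 6032) with a query along these lines, using the admin credentials from /etc/proxysql.cnf:

#mysql -uadmin -padmin -h127.0.0.1 -P6032
mysql> select hostgroup_id, hostname, port, status from runtime_mysql_servers order by hostgroup_id;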

Step 7: Configure the second ProxySQL

Configure ProxySQL-22 the same way, then query and update the database and check that read/write splitting behaves correctly. With both ProxySQL instances configured, the next step is to turn them into a keepalived cluster.
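
Since both instances carry the cluster settings from /etc/proxysql.cnf, they should sync configuration with each other. One way to confirm this from either admin interface, assuming the clustering tables documented for ProxySQL 1.4 are present in the installed build:

mysql> select * from proxysql_servers;
mysql> select hostname, name, checksum, updated_at from stats_proxysql_servers_checksums;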

Configure keepalived
Check whether the ip_vs and xt_set kernel modules are loaded on the host. If not, load them, otherwise keepalived fails to start inside the containers with errors such as:
IPVS: Can't initialize ipvs: Protocol not available
Unable to load module xt_set - not using ipsets

#lsmod | grep ip_vs
#lsmod | grep xt_set
If these print nothing, do the following:
#vi /etc/sysconfig/modules/ip_vs.modules

#!/bin/sh
/sbin/modinfo -F filename ip_vs > /dev/null 2>&1
if [ $? -eq 0 ]; then
    /sbin/modprobe ip_vs
fi

#chmod 755 /etc/sysconfig/modules/ip_vs.modules

#vi /etc/sysconfig/modules/xt_set.modules

#!/bin/sh
/sbin/modinfo -F filename xt_set > /dev/null 2>&1
if [ $? -eq 0 ]; then
    /sbin/modprobe xt_set
fi

#chmod 755 /etc/sysconfig/modules/xt_set.modules
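
Rather than rebooting, the modules can also be loaded immediately by hand; the reboot below merely confirms they come back at boot:

#modprobe ip_vs
#modprobe xt_set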

#reboot
Check:
#lsmod | grep ip_vs
#lsmod | grep xt_set

On RHEL6/CentOS6, use echo "modprobe ip_vs" >> /etc/rc.sysinit instead.
In addition, configure: # echo 'net.ipv4.ip_nonlocal_bind=1' >> /etc/sysctl.conf
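
To apply that sysctl without a reboot and confirm it took effect:

#sysctl -p
#sysctl net.ipv4.ip_nonlocal_bind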

Step 8: Set up passwordless SSH between the two ProxySQL hosts
The check script uses ssh to reach the peer and decide whether its keepalived service is active, so the SSH service and passwordless login must be configured.

First, enable the SSH service:
#docker exec -it proxysql-21 /bin/bash
[root@proxysql-21 /]# mkdir /var/run/sshd/
[root@proxysql-21 /]# yum install -y openssh-server openssh-clients passwd
[root@proxysql-21 /]# echo "123456" | passwd --stdin root
Generate the host keys; after entering each command, just press Enter twice to confirm:
[root@proxysql-21 /]# ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
[root@proxysql-21 /]# ssh-keygen -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key
[root@proxysql-21 /]# ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key
[root@proxysql-21 /]# sed -i "s/UsePAM yes/UsePAM no/g" /etc/ssh/sshd_config
[root@proxysql-21 /]# systemctl start sshd

#docker exec -it proxysql-22 /bin/bash
Configure it exactly the same as proxysql-21.

Second, set up key-based authentication:
proxysql-21:
[root@proxysql-21 /]# ssh-keygen
[root@proxysql-21 /]# ssh-copy-id 192.168.100.22
root@192.168.100.22's password: 123456

proxysql-22:
[root@proxysql-22 /]# ssh-keygen
[root@proxysql-22 /]# ssh-copy-id 192.168.100.21
root@192.168.100.21's password: 123456
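
Because the failover script depends on it, confirm that passwordless login now works in both directions, e.g. from proxysql-21:

[root@proxysql-21 /]# ssh 192.168.100.22 hostname
This should print proxysql-22 without asking for a password.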

Step 9: Configure keepalived on 100.21
[root@proxysql-21 /]# yum install -y iproute keepalived
[root@proxysql-21 /]# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL_21
   #  vrrp_strict    # commented out, otherwise the VIP cannot be pinged
}

vrrp_script chk_proxysql
{  # this { must be on its own line
    script "/home/check_proxysql.sh"
    interval 1
}

vrrp_instance proxysql {
    state BACKUP
    interface eth0
    virtual_router_id 55
    priority 100
    advert_int 1
    nopreempt        # non-preemptive; used together with state BACKUP
    authentication {
        auth_type PASS
        auth_pass XXXX
    }

    track_script {
        chk_proxysql
    }

    virtual_ipaddress {
        192.168.100.30/24
    }
}

The content of /home/check_proxysql.sh:

#!/bin/sh
vip='192.168.100.30'
ip a | grep -q "$vip"
if [ $? -ne 0 ];then
    exit 0
fi
peer_ip='192.168.100.22'
peer_port=22
proxysql='proxysql-21'
log=/home/keepalived.log
# an alias would not expand inside a non-interactive script, so use a function instead
date() { command date +"%y-%m-%d_%H:%M:%S"; }
#echo "`date`  enter script." >> $log
#check if this keepalived is MASTER
#echo "`date`  after check keepalived master script." >> $log
#check if data port(6033) is alive
data_port_stats=$(timeout 2  bash -c 'cat < /dev/null > /dev/tcp/0.0.0.0/6033' &> /dev/null;echo $?)
if [ $data_port_stats -eq 0 ];then
    exit 0
else
    #check if the other keepalived is running
    peer_keepalived=$(ssh -p$peer_port $peer_ip 'systemctl is-active keepalived.service')
    if [ "$peer_keepalived" != "active" ];then
        echo "`date`  data port:6033 of $proxysql is not available, but the BACKUP keepalived is not running, so can't do the failover" >> $log
    else
        echo "`date`  data port:6033 of $proxysql is not available, now SHUTDOWN keepalived." >> $log
        systemctl stop keepalived.service
    fi
fi

[root@proxysql-21 /]# chmod +x /home/check_proxysql.sh
[root@proxysql-21 /]# systemctl enable keepalived
[root@proxysql-21 /]# systemctl start keepalived
Linux has a special kind of file path, /dev/[tcp|udp]/host/port: reading from or writing to such a path makes the shell try to connect to the given host on the given port, and if the host and port are reachable, a socket connection is established. The /dev/[tcp|udp] directory does not actually exist; the path is handled specially (a bash feature).
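
The same probe the script performs can be tried by hand:

#timeout 2 bash -c 'cat < /dev/null > /dev/tcp/0.0.0.0/6033'; echo $?
An exit status of 0 means port 6033 is accepting connections; anything else means it is not.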

Step 10: Configure keepalived on 100.22
#docker exec -it proxysql-22 /bin/bash
[root@proxysql-22 /]# yum install -y iproute openssh-clients keepalived
[root@proxysql-22 /]# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL_22
   #  vrrp_strict    # commented out, otherwise the VIP cannot be pinged
}

vrrp_script chk_proxysql 
{  # this { must be on its own line
    script "/home/check_proxysql.sh"
    interval 1
}

vrrp_instance proxysql {
    state BACKUP
    interface eth0
    virtual_router_id 55
    priority 90
    advert_int 1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass XXXX
    }

    track_script {
        chk_proxysql
    }

    virtual_ipaddress {
        192.168.100.30/24
    }
}

The content of /home/check_proxysql.sh:

#!/bin/sh
vip='192.168.100.30'
ip a | grep -q "$vip"
if [ $? -ne 0 ];then
    exit 0
fi
peer_ip='192.168.100.21'
peer_port=22
proxysql='proxysql-22'
log=/home/keepalived.log
# an alias would not expand inside a non-interactive script, so use a function instead
date() { command date +"%y-%m-%d_%H:%M:%S"; }
#echo "`date`  enter script." >> $log
#check if this keepalived is MASTER
#echo "`date`  after check keepalived master script." >> $log
#check if data port(6033) is alive
data_port_stats=$(timeout 2  bash -c 'cat < /dev/null > /dev/tcp/0.0.0.0/6033' &> /dev/null;echo $?)
if [ $data_port_stats -eq 0 ];then
    exit 0
else
    #check if the other keepalived is running
    peer_keepalived=$(ssh -p$peer_port $peer_ip 'systemctl is-active keepalived.service')
    if [ "$peer_keepalived" != "active" ];then
        echo "`date`  data port:6033 of $proxysql is not available, but the BACKUP keepalived is not running, so can't do the failover" >> $log
    else
        echo "`date`  data port:6033 of $proxysql is not available, now SHUTDOWN keepalived." >> $log
        systemctl stop keepalived.service
    fi
fi

[root@proxysql-22 /]# chmod +x /home/check_proxysql.sh
[root@proxysql-22 /]# systemctl enable keepalived
[root@proxysql-22 /]# systemctl start keepalived

Configure port forwarding on the host
Step 11: Configure port forwarding on the host

#docker network ls | grep mysqlnet
6a9cfeb216f9 mysqlnet bridge local

#brctl show, to list the bridge devices:
bridge name bridge id STP enabled interfaces
br-6a9cfeb216f9 8000.024233a036ff no veth78b39b9
veth942efcb
vethe5fb8b6
vethe86e1c7
vethffd913e
br0 8000.08606e577285 no enp3s0
docker0 8000.0242115d2745 no
#ifconfig br-6a9cfeb216f9, which shows the subnet is 192.168.100.0/24.
#iptables -t nat -A DOCKER ! -i br-6a9cfeb216f9 -p tcp -m tcp --dport 10088 -j DNAT --to-destination 192.168.100.30:6033
#iptables -t nat -nL | grep 6033

Finally, test from a MySQL client by connecting to ip:10088; the username and password are both proxysql.
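
A quick end-to-end check through the forwarded port (replace <host_ip> with the host's own address):

#mysql -uproxysql -pproxysql -h<host_ip> -P10088 -e 'select @@hostname'
Repeated SELECTs should come back from the reader nodes, while an UPDATE issued over the same connection lands on the writer, confirming the read/write split end to end.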
