
MMM Architecture for MySQL High Availability and Read/Write Splitting (Advanced Edition, with Amoeba)


A couple of days ago, while browsing blogs, I stumbled on an article about MMM written by an expert. After reading it I couldn't wait to try it myself. The author made it look effortless, and I assumed my own run would go just as smoothly, but in practice I was not nearly as quick and hit plenty of pitfalls. If that article was "building a tower from level ground", then this one is "turning a bungalow into a villa": my production environment already has two MySQL servers in master-master replication, and this article describes how to upgrade that existing architecture.


Environment:

IP            | OS         | Database     | Software           | Role/VIP
192.168.6.109 | CentOS 7.2 | MySQL 5.6.34 | mysql-mmm 2.2.1-15 | Monitor/Amoeba
192.168.6.107 | CentOS 7.2 | MySQL 5.6.34 | mysql-mmm 2.2.1-15 | slave1 / 192.168.6.36
192.168.6.108 | CentOS 6.5 | MySQL 5.6.34 | mysql-mmm 2.2.1-3  | slave2 / 192.168.6.37
192.168.6.30  | CentOS 6.5 | MySQL 5.6.34 | mysql-mmm 2.2.1-3  | master1 / 192.168.6.35
192.168.6.31  | CentOS 6.5 | MySQL 5.6.34 | mysql-mmm 2.2.1-3  | master2

Starting the Deployment

Part 1: MMM Deployment and Installation

I. Install MySQL 5.6.34 (on both masters and both slaves)

1.yum -y install gcc perl lua-devel pcre-devel openssl-devel gd-devel gcc-c++ ncurses-devel libaio autoconf

2.tar xvf mysql-5.6.34-linux-glibc2.5-x86_64.tar.gz

3.useradd mysql

4.mv mysql-5.6.34-linux-glibc2.5-x86_64 /data/mysql && chown -R mysql.mysql /data/mysql/

5.vim my.cnf (server_id must be different on each of the four servers)

[mysqld]
socket    = /data/mysql/mysql.sock
pid-file  = /data/mysql/mysql.pid
basedir   = /data/mysql
datadir   = /data/mysql/data
tmpdir    = /data/mysql/data
log_error = error.log
relay_log = relay.log
binlog-ignore-db=mysql,information_schema
character_set_server=utf8
log_bin=mysql_bin
server_id=1
log_slave_updates=true
sync_binlog=1
auto_increment_increment=2
auto_increment_offset=1        # use offset 1 on master1 and offset 2 on master2 to avoid auto-increment collisions

6.cd /data/mysql/ && scripts/mysql_install_db --user=mysql --basedir=/data/mysql --datadir=/data/mysql/data/

7.vim /root/.bash_profile and change PATH=$PATH:$HOME/bin to:

PATH=$PATH:$HOME/bin:/data/mysql/bin:/data/mysql/lib

8.source /root/.bash_profile

9.cp support-files/mysql.server /etc/init.d/mysql

10.service mysql start
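
Before moving on, it's worth a quick sanity check that the server actually came up (a minimal sketch; it assumes the socket path from the my.cnf above and that /data/mysql/bin is on PATH):

# confirm the server answers on the configured socket
mysqladmin --socket=/data/mysql/mysql.sock ping

# verify the version and the per-server id set in my.cnf
mysql --socket=/data/mysql/mysql.sock -e "select version(); select @@server_id;"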


II. Set Up Master-Master Replication

My environment already runs master-master replication, so I'll just list the commands used to set it up.

1. Run on 192.168.6.30 (master1):

grant replication slave on *.* to 'replication'@'192.168.6.%' identified by '123456';
show master status;        # note the file and position; master2 points at these
change master to master_host='192.168.6.31',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=330;
start slave;
show slave status\G

2. Run on 192.168.6.31 (master2):

grant replication slave on *.* to 'replication'@'192.168.6.%' identified by '123456';
show master status;        # note the file and position; master1 points at these
change master to master_host='192.168.6.30',master_user='replication',master_password='123456',master_log_file='mysql_bin.000001',master_log_pos=334;
start slave;
show slave status\G

Check that both Slave_IO_Running and Slave_SQL_Running report YES; if so, master-master replication is working.
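
If you'd rather not eyeball the full output, a one-liner like this pulls out just the two flags (a sketch; it assumes passwordless root login via the local socket, adjust credentials as needed):

# both lines should print Yes on each master
mysql -e "show slave status\G" | grep -E "Slave_(IO|SQL)_Running:"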


III. Joining a Running MySQL Master-Master Setup (adding slaves without stopping service)

In practice, ops engineers rarely get to build a MySQL cluster from scratch; usually there's an existing environment to upgrade, full of data and serving critical traffic that cannot be taken offline. While experimenting I assumed I could simply attach the new slaves, but the binlog position changes every second, so a direct join is impossible. After some research I found a way to add slaves without stopping service.

1. Dump all databases, recording the binlog file and position in the dump

mysqldump --skip-lock-tables --single-transaction --master-data=2 -A > ~/dump.sql

--master-data: defaults to 1, which writes the CHANGE MASTER TO statement (carrying the binlog file and position at the start of the dump) into the output; set to 2, the statement is still written but commented out. -A dumps all databases.

--single-transaction: sets the transaction isolation level when the dump starts and opens a consistent-snapshot transaction, then runs UNLOCK TABLES; --lock-tables, by contrast, locks tables against writes until the dump finishes.

2. Extract the binlog file and position from the dump

head dump.sql -n80 | grep "MASTER_LOG_POS"

CHANGE MASTER TO MASTER_LOG_FILE='mysql_bin.000149', MASTER_LOG_POS=120;

3. Compress the dump

gzip ~/dump.sql


4. Copy the file to the two slaves, then decompress and import it

scp ~/dump.sql.gz 192.168.6.107:/root
scp ~/dump.sql.gz 192.168.6.108:/root
gunzip dump.sql.gz
mysql < dump.sql
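
The decompress-and-import can also be done in one pipe, which keeps the compressed copy around (a minor variation on the steps above):

# stream the dump straight into mysql without unpacking to disk
gunzip -c dump.sql.gz | mysql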


5. Attach the two slave servers

Run on 192.168.6.107 (slave1):

change master to master_host='192.168.6.30',master_user='replication',master_password='123456',master_log_file='mysql_bin.000149',master_log_pos=120;
start slave;
show slave status\G

Run on 192.168.6.108 (slave2):

change master to master_host='192.168.6.30',master_user='replication',master_password='123456',master_log_file='mysql_bin.000149',master_log_pos=120;
start slave;
show slave status\G

Again, check that both Slave_IO_Running and Slave_SQL_Running show YES; if so, master-slave replication is working.


IV. Install and Deploy the MMM Servers

1. Install the mysql-mmm packages (on all four database servers; the Monitor host needs them as well)

yum -y install mysql-mmm*        # CentOS 7 and CentOS 6 pull in a different number of packages; this can be ignored for now

2. On the Monitor, edit the shared mysql-mmm configuration file

vim /etc/mysql-mmm/mmm_common.conf

active_master_role      writer

<host default>
    cluster_interface       ens160        # on CentOS 6 the interface name is eth0
    pid_path                /run/mysql-mmm-agent.pid        # on CentOS 6 the path is /var/run/mysql-mmm-agent.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        replication
    replication_password    123456
    agent_user              mmm_agent
    agent_password          RepAgent
</host>

<host db1>                        # name the agent uses when connecting
    ip      192.168.6.30
    mode    master                # role
    peer    db2                   # the other master
</host>

<host db2>
    ip      192.168.6.31
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.6.107
    mode    slave
</host>

<host db4>
    ip      192.168.6.108
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.6.35        # writer VIP
    mode    exclusive            # only one host holds the writer role at a time
</role>

<role reader>
    hosts   db3, db4
    ips     192.168.6.36,192.168.6.37
    mode    balanced            # load-balanced across the readers
</role>

3. Copy this file to the same path on all four database servers, then adjust the values flagged above for CentOS 6 (interface name and pid path)

scp /etc/mysql-mmm/mmm_common.conf 192.168.6.30:/etc/mysql-mmm/mmm_common.conf
scp /etc/mysql-mmm/mmm_common.conf 192.168.6.31:/etc/mysql-mmm/mmm_common.conf
scp /etc/mysql-mmm/mmm_common.conf 192.168.6.107:/etc/mysql-mmm/mmm_common.conf
scp /etc/mysql-mmm/mmm_common.conf 192.168.6.108:/etc/mysql-mmm/mmm_common.conf

4. On the Monitor, edit the mmm_mon.conf file

vim /etc/mysql-mmm/mmm_mon.conf

include mmm_common.conf

<monitor>
    ip                  127.0.0.1
    pid_path            /run/mysql-mmm-monitor.pid        # the Monitor runs CentOS 7, so no change needed
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            192.168.6.30,192.168.6.31,192.168.6.107,192.168.6.108    # list the IPs of all four database servers
    auto_set_online     10        # automatically set a recovered host back online after 10 seconds
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    RepMonitor
</host>

debug 0

5. Start the monitor

systemctl start mysql-mmm-monitor
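
Before configuring the agents, confirm the monitor is healthy (a quick sketch; mmm_control ships with the monitor package):

# the daemon should be active
systemctl status mysql-mmm-monitor

# run the monitor's host checks (ping, mysql, rep_threads, rep_backlog) against every host
mmm_control checks all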


V. Configure the mysql-mmm Agent on the Four MySQL Servers

1. The configuration files above reference a monitor user and an agent user, which must be granted in the database. Since replication is already in place, granting on one master is enough.

grant super, replication client, process on *.* to 'mmm_agent'@'192.168.6.%' identified by 'RepAgent';
grant replication client on *.* to 'mmm_monitor'@'192.168.6.%' identified by 'RepMonitor';
flush privileges;
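
It's worth verifying the monitor account from the Monitor host before starting anything (a sketch to run on 192.168.6.109; it loops over the four backends):

# the monitor user should be able to reach every database server
for ip in 192.168.6.30 192.168.6.31 192.168.6.107 192.168.6.108; do
    mysql -ummm_monitor -pRepMonitor -h$ip -e "select @@hostname;"
done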

2. Edit the mmm_agent file on each of the four servers

vim /etc/mysql-mmm/mmm_agent.conf

this db3            # write this server's own host name here, matching mmm_common.conf

3. Open the required firewall port

iptables -I INPUT -p tcp --dport 9989 -j ACCEPT        # agent port; the Monitor must be able to reach it
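
Note that the rule above does not survive a reboot. A sketch for persisting it (assuming the stock firewall tooling on each OS; on CentOS 7, firewalld may be managing the firewall rather than raw iptables):

# CentOS 6: save the running iptables rules
service iptables save

# CentOS 7, if firewalld is in use
firewall-cmd --permanent --add-port=9989/tcp
firewall-cmd --reload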

4. Start the agent

systemctl start mysql-mmm-agent        # CentOS 7
service mysql-mmm-agent start          # CentOS 6


VI. Check Cluster Status and Test Failover

1. On the Monitor, run mmm_control show

db1(192.168.6.30) master/ONLINE. Roles: writer(192.168.6.35)

db2(192.168.6.31) master/ONLINE. Roles:

db3(192.168.6.107) slave/ONLINE. Roles: reader(192.168.6.36)

db4(192.168.6.108) slave/ONLINE. Roles: reader(192.168.6.37)

Output like the above means the configuration is correct.

2. Failover testing

① On the active master, run service mysql stop

db1(192.168.6.30) master/HARD_OFFLINE. Roles:

db2(192.168.6.31) master/ONLINE. Roles: writer(192.168.6.35)

db3(192.168.6.107) slave/ONLINE. Roles: reader(192.168.6.36)

db4(192.168.6.108) slave/ONLINE. Roles: reader(192.168.6.37)

The writer VIP failed over successfully.

② On a slave, run service mysql stop

db1(192.168.6.30) master/HARD_OFFLINE. Roles:

db2(192.168.6.31) master/ONLINE. Roles: writer(192.168.6.35)

db3(192.168.6.107) slave/ONLINE. Roles: reader(192.168.6.36),reader(192.168.6.37)

db4(192.168.6.108) slave/HARD_OFFLINE. Roles:

The reader VIP failed over successfully. Once the mysql service is restarted, the host's status returns to ONLINE. At this point the MMM-based MySQL cluster is fully configured.
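
Roles can also be moved by hand, which is handy for planned maintenance (a sketch using standard mmm_control commands on the Monitor):

# take a host out of rotation and bring it back
mmm_control set_offline db2
mmm_control set_online db2

# hand the writer role back to db1 once it has recovered
mmm_control move_role writer db1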


Part 2: Using Amoeba to Provide a Single Entry Point

MMM exposes three endpoints, one writer and two readers, so to use this architecture an application would itself have to split its traffic into read and write parts. A middleware layer can tie the architecture together behind a single entry point.

What is Amoeba?

The Amoeba project began releasing Amoeba for MySQL in 2008. The software is a distributed-database front-end proxy layer (Database Proxy) for MySQL: it sits between clients and the DB server(s), transparent to the client, acting as a SQL router at the application layer. It provides load balancing, high availability, SQL filtering, read/write splitting, routing to target databases, and fanning a request out to multiple databases with result merging. With Amoeba you can build multi-source high availability, load balancing, and data sharding; it is already used in production at many companies.

Starting the Deployment

1. Download Amoeba

wget https://sourceforge.net/projects/amoeba/files/Amoeba%20for%20mysql/3.x/amoeba-mysql-3.0.5-RC-distribution.zip

2. Install

unzip amoeba-mysql-3.0.5-RC-distribution.zip
mv amoeba-mysql-3.0.5-RC /usr/local/amoeba/

3. Install the Java Environment

tar xvf jdk-8u111-linux-x64.tar.gz
mv jdk1.8.0_111/ /usr/local/jdk

vim /etc/profile

export JAVA_HOME=/usr/local/jdk
export PATH=${PATH}:${JAVA_HOME}/bin
export CLASSPATH=${CLASSPATH}:${JAVA_HOME}/lib

source /etc/profile
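
A quick check that the JDK is visible in the new environment:

# should report java version "1.8.0_111"
java -version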

4. Configure Amoeba

vim /usr/local/amoeba/conf/dbServers.xml

<?xml version="1.0" encoding="gbk"?>

<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">
<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">

                <!-- 
                        Each dbServer needs to be configured into a Pool,
                        If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:
                         add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig
                         such as 'multiPool' dbServer   
                -->

        <dbServer name="abstractServer" abstractive="true">
                <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">
                        <property name="connectionManager">${defaultManager}</property>
                        <property name="sendBufferSize">64</property>
                        <property name="receiveBufferSize">128</property>

                        <!-- mysql port -->
                        <property name="port">3306</property>        <!-- port of the backend MySQL servers Amoeba connects to; default 3306 -->

                        <!-- mysql schema -->
                        <property name="schema">amoeba</property>    <!-- default database -->

                        <!-- mysql user -->
                        <property name="user">amoeba</property>        <!-- account Amoeba uses to connect to the backends; must be granted in MySQL -->

                        <property name="password">12345</property>
                </factoryConfig>

                <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">
                        <property name="maxActive">500</property>
                        <property name="maxIdle">500</property>
                        <property name="minIdle">1</property>
                        <property name="minEvictableIdleTimeMillis">600000</property>
                        <property name="timeBetweenEvictionRunsMillis">600000</property>
                        <property name="testOnBorrow">true</property>
                        <property name="testOnReturn">true</property>
                        <property name="testWhileIdle">true</property>
                </poolConfig>
        </dbServer>
                <dbServer name="writedb"  parent="abstractServer">        <!--設置寫服務器名稱-->
                <factoryConfig>
                        <!-- mysql ip -->
                        <property name="ipAddress">192.168.6.35</property>  <!--寫服務器VIP-->
                </factoryConfig>
        </dbServer>

        <dbServer name="slave1"  parent="abstractServer">                <!--設置讀服務器名稱-->
                <factoryConfig>
                        <!-- mysql ip -->
                        <property name="ipAddress">192.168.6.36</property>    <!--讀服務器VIP-->
                </factoryConfig>
        </dbServer>
        <dbServer name="slave2"  parent="abstractServer">
                <factoryConfig>
                        <!-- mysql ip -->
                        <property name="ipAddress">192.168.6.37</property>
                </factoryConfig>
        </dbServer>

        <dbServer name="myslaves" virtual="true">                <!--讀服務器池的名稱-->
                <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
                        <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->
                        <property name="loadbalance">1</property>        <!--調度算法選擇輪詢-->

                        <!-- Separated by commas,such as: server1,server2,server1 -->
                        <property name="poolNames">slave1,slave2</property>         <!--讀服務器池的服務器-->
                </poolConfig>
        </dbServer>

</amoeba:dbServers>

vim /usr/local/amoeba/conf/amoeba.xml

<?xml version="1.0" encoding="gbk"?>

<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">
<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">

        <proxy>

                <!-- service class must implements com.meidusa.amoeba.service.Service -->
                <service name="Amoeba for Mysql" class="com.meidusa.amoeba.mysql.server.MySQLService">
                        <!-- port -->
                        <property name="port">8066</property>        <!--登錄數據庫時連接的端口-->

                        <!-- bind ipAddress: the IP clients connect to -->
                        <!--
                        <property name="ipAddress">127.0.0.1</property>
                         -->

                        <property name="connectionFactory">
                                <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">
                                        <property name="sendBufferSize">128</property>
                                        <property name="receiveBufferSize">64</property>
                                </bean>
                        </property>

                        <property name="authenticateProvider">
                                <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">

                                        <property name="user">root</property>        <!--連接數據庫時使用的用戶名-->

                                        <property name="password">123456</property>    <!--連接數據庫時使用的密碼-->

                                        <property name="filter">
                                                <bean class="com.meidusa.toolkit.net.authenticate.server.IPAccessController">
                                                        <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>
                                                </bean>
                                        </property>
                                </bean>
                        </property>

                </service>

                <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">

                        <!-- proxy server client process thread size -->
                        <property name="executeThreadSize">128</property>

                        <!-- per connection cache prepared statement size  -->
                        <property name="statementCacheSize">500</property>

                        <!-- default charset -->
                        <property name="serverCharset">utf8</property>

                        <!-- query timeout( default: 60 second , TimeUnit:second) -->
                        <property name="queryTimeout">60</property>
                </runtime>

        </proxy>

        <!-- 
                Each ConnectionManager will start as thread
                manager responsible for the Connection IO read , Death Detection
        -->
        <connectionManagerList>
                <connectionManager name="defaultManager" class="com.meidusa.toolkit.net.MultiConnectionManagerWrapper">
                        <property name="subManagerClassName">com.meidusa.toolkit.net.AuthingableConnectionManager</property>
                </connectionManager>
        </connectionManagerList>

                <!-- default using file loader -->
        <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">
                <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>
        </dbServerLoader>

        <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">
                <property name="ruleLoader">
                        <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">
                                <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>
                                <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>
                        </bean>
                </property>
                <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
                <property name="LRUMapSize">1500</property>
                <property name="defaultPool">writedb</property>            <!--默認使用的服務器名-->


                <property name="writePool">writedb</property>        <!--寫服務器池的名稱-->
                <property name="readPool">myslaves</property>        <!--讀服務器池的名稱-->
                <property name="needParse">true</property>
        </queryRouter>
</amoeba:configuration>

5. Create and Grant the amoeba User

Run on master1:

create database amoeba;
grant all on *.* to 'amoeba'@'192.168.6.109' identified by '12345';        -- password must match dbServers.xml
flush privileges;
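
Before starting Amoeba, confirm the account actually works through the writer VIP (a sketch to run from 192.168.6.109):

# should reach the current writer through the VIP and land in the default schema
mysql -uamoeba -p12345 -h192.168.6.35 amoeba -e "select @@hostname;"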

6. Adjust the JVM Settings

vim /usr/local/amoeba/jvm.properties

JVM_OPTIONS="-server -Xms256m -Xmx1024m -Xss256k"         # raise Xss to 256k or more, otherwise the service will not start

7. Start the Service

/usr/local/amoeba/bin/launcher &
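
To confirm Amoeba came up, check that it is listening on the client port (a quick check on the Monitor/Amoeba host):

# Amoeba should be listening on the port configured in amoeba.xml
ss -lntp | grep 8066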

8. Log In to Amoeba from the Monitor

mysql -uroot -p123456 -h127.0.0.1 -P8066


If the MySQL prompt appears, the login succeeded.
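
As a final check that the splitting works, compare @@server_id across connections (a sketch; it assumes Amoeba routes plain SELECTs to the read pool, so with round-robin scheduling the returned ids should alternate between the two slaves):

# several reads through Amoeba; the returned server_id should alternate
for i in 1 2 3 4; do
    mysql -uroot -p123456 -h127.0.0.1 -P8066 -e "select @@server_id;"
done

# a write goes to the writedb pool; read it back to confirm it replicated
mysql -uroot -p123456 -h127.0.0.1 -P8066 -e "create table amoeba.t1(id int); insert into amoeba.t1 values(1); select * from amoeba.t1;"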

Final architecture diagram: (image omitted)
