Part 1: Environment preparation
Application | Host
---|---
mysql-master | 192.168.205.184
mysql-slave | 192.168.205.185
mycat-01, keepalived, JDK | 192.168.205.182
mycat-02, keepalived, JDK | 192.168.205.183
MySQL master-slave replication setup (omitted)
Part 2: Install the JDK, Mycat and keepalived on hosts 192.168.205.183 and 192.168.205.182
Taking host 192.168.205.183 as the example; the other host is configured exactly the same way:
1. Install the JDK
Upload the JDK package and extract it to /usr/local/jdk
vim /etc/profile
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
. /etc/profile
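To confirm the new environment variables take effect, check that the JDK on the PATH is the one just installed:
java -version
which java    # should resolve to /usr/local/jdk/bin/java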
2. Download Mycat from the official site and install it
wget http://dl.mycat.org.cn/1.6.7.4/Mycat-server-1.6.7.4-release/Mycat-server-1.6.7.4-release-20200105164103-linux.tar.gz
tar -zxvf Mycat-server-1.6.7.4-release-20200105164103-linux.tar.gz
mv mycat/ /opt/
3. Edit the configuration files
cd /opt/mycat/conf
vim server.xml
vim schema.xml
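The exact contents of server.xml and schema.xml depend on your environment; the following is a minimal sketch of the read/write-splitting setup used in the rest of this article (logical schema TSDB, Mycat user lilong with password 111111, master 192.168.205.184 as the write host, slave 192.168.205.185 as the read host). The dataNode name dn1, the mapping to the backend database cs_db, the backend credentials and the balance/switchType values are illustrative assumptions; adjust them to your environment.

server.xml (user definition):

<user name="lilong">
    <property name="password">111111</property>
    <property name="schemas">TSDB</property>
</user>

schema.xml (schema, dataNode and dataHost):

<schema name="TSDB" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"/>
<dataNode name="dn1" dataHost="dh1" database="cs_db"/>
<dataHost name="dh1" maxCon="1000" minCon="10" balance="1" writeType="0"
          dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <!-- replace user/password with real backend MySQL credentials -->
    <writeHost host="hostM1" url="192.168.205.184:3306" user="root" password="backend_password">
        <readHost host="hostS1" url="192.168.205.185:3306" user="root" password="backend_password"/>
    </writeHost>
</dataHost>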
4. Start the Mycat service
cd /opt/mycat/bin
./mycat start
After Mycat starts it listens on two ports: 8066 (the data/SQL port) and 9066 (the management port).
ss -ntupl
5. Connect to port 8066 with a MySQL client (here we log in to Mycat from the mysql-master node):
mysql -ulilong -p111111 -P8066 -h 192.168.205.183
6. View the database information inside Mycat
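For example, listing the logical schemas that schema.xml exposes through Mycat:
show databases;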
7. Switch to the TSDB schema and create the table city:
create table city(id int,name varchar(8),area float(5,2),people_num int);
Check whether the table was created successfully:
show tables;
Check on both database nodes whether the table now exists (with read/write splitting the CREATE TABLE statement was routed to the write node, i.e. the master, and the slave picked it up through replication).
8. Connect to port 9066 with a MySQL client to view the dataNode information:
mysql -ulilong -p111111 -P9066 -h 192.168.205.183
show @@dataNode;
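The management port also accepts other status commands; for example (a few of the commands available in Mycat 1.6):
show @@help;
show @@datasource;
show @@heartbeat;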
Install keepalived on both Mycat nodes and give it a dedicated log file (by default keepalived logs to /var/log/messages, which is inconvenient to search).
192.168.205.182:
yum -y install keepalived
vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -d -S 0"
vim /etc/rsyslog.conf
local0.* /var/log/keepalived.log
systemctl restart rsyslog
vim /etc/keepalived/keepalived.conf
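The keepalived.conf for the 182 node (the MASTER) might look like the following minimal sketch, assuming the NIC name ens33 and illustrative values for virtual_router_id, priority and auth_pass; the VIP 192.168.205.250 is the one used later in this article:

global_defs {
    router_id mycat_01
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33              # adjust to the actual NIC name
    virtual_router_id 51         # must be identical on both nodes
    priority 100                 # higher than the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.205.250          # the VIP clients will connect to
    }
}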
192.168.205.183:
yum -y install keepalived
vim /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -d -S 0"
vim /etc/rsyslog.conf
local0.* /var/log/keepalived.log
systemctl restart rsyslog
vim /etc/keepalived/keepalived.conf
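On the 183 node (the BACKUP) the sketch is identical except for the role and priority:

vrrp_instance VI_1 {
    state BACKUP
    interface ens33              # adjust to the actual NIC name
    virtual_router_id 51         # same value as on 182
    priority 90                  # lower than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.205.250
    }
}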
Start keepalived on both nodes:
192.168.205.182:
systemctl start keepalived
You can see that the VIP is now bound on node 182.
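A quick way to confirm which node currently holds the VIP is the standard iproute2 command:
ip addr show | grep 192.168.205.250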
192.168.205.183:
systemctl start keepalived
From the MySQL node, try logging in through the VIP:
mysql -ulilong -p111111 -P8066 -h 192.168.205.250
The login succeeds.
Now stop keepalived on 192.168.205.182 and check whether the VIP fails over to 192.168.205.183.
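That is, on the 182 node:
systemctl stop keepalived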
You can see the VIP is no longer present on node 182.
On node 183 the VIP has failed over to this node.
Start keepalived on the master node again and the VIP moves back. By default keepalived runs in preemptive mode, so once the master node recovers it immediately takes the VIP back.
The basic setup is now complete, but consider this: if the Mycat service fails on the node that currently holds the keepalived MASTER role, will the VIP still switch to the backup node? The answer is no. We therefore need a keepalived check script: if Mycat on the MASTER node is down, the script kills the keepalived process on that node so the VIP fails over to the backup node, which continues to serve requests.
Create a directory for the script on both nodes:
mkdir -p /etc/keepalived/scripts
vim /etc/keepalived/scripts/chk_mycat.sh
#!/bin/bash
# Count the Mycat listening sockets (there should be exactly two: 8066 and 9066)
MYCAT_PORT=$(ss -ntupl | egrep '8066|9066' | wc -l)
# If both ports are not up, Mycat is down: kill keepalived so the VIP fails over
if [ "$MYCAT_PORT" -ne 2 ]; then
    pkill keepalived
fi
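keepalived executes the check script directly, so it also needs to be executable (do this on both nodes):
chmod +x /etc/keepalived/scripts/chk_mycat.sh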
Add the following to /etc/keepalived/keepalived.conf: the vrrp_script block at the top level of the file and the track_script block inside the vrrp_instance section.
vrrp_script chk_mycat {
    script "/etc/keepalived/scripts/chk_mycat.sh"
    interval 2
    weight -50
    fall 3
    rise 3
    timeout 3
}
track_script {
    chk_mycat
}
Restart the keepalived service:
systemctl restart keepalived
Verify that the check script works:
Manually stop the Mycat service on the master node:
cd /opt/mycat/bin && ./mycat stop
At this point keepalived on the master node has stopped as well.
Checking the backup node, the VIP has failed over to it.
Access from the MySQL client still works normally.
Configure email notification for MASTER <--> BACKUP failover
Install the postfix mail service (installed by default) and the mailx mail client on both keepalived nodes:
yum -y install mailx
vim /etc/mail.rc
set from=[email protected]
set smtp=smtp.exmail.qq.com
set smtp-auth-user=[email protected]
set smtp-auth-password=***
set smtp-auth=login
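Before wiring this into keepalived you can send a quick test message with mailx (using the same recipient address configured above):
echo "mailx test" | mail -s "mailx test from $(hostname)" [email protected]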
Add the following notify directives to the corresponding place in keepalived.conf, i.e. inside the vrrp_instance block (do the same on both nodes).
The state names MASTER, BACKUP and FAULT can be written in either upper or lower case.
vim /etc/keepalived/keepalived.conf
notify_master "/etc/keepalived/scripts/notify.sh MASTER"
notify_backup "/etc/keepalived/scripts/notify.sh BACKUP"
notify_fault "/etc/keepalived/scripts/notify.sh FAULT"
Create the corresponding script file (do the same on both nodes):
cd /etc/keepalived/scripts
vim notify.sh
#!/bin/bash
# Recipient of the failover notification mails
SEND_to_MAIL='[email protected]'

# Send a notification mail; $1 is the new keepalived state
notify() {
    MAIL_SUBJECT="$(hostname) to be $1, VIP moved"
    MAIL_TEXT="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$MAIL_TEXT" | mail -s "$MAIL_SUBJECT" "$SEND_to_MAIL"
}

case $1 in
MASTER)
    notify MASTER
    ;;
BACKUP)
    notify BACKUP
    ;;
FAULT)
    notify FAULT
    ;;
*)
    echo "Usage: $(basename $0) {MASTER|BACKUP|FAULT}"
    exit 1
    ;;
esac
Make the script executable:
chmod +x notify.sh
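With the script executable, you can also run it by hand once to confirm a notification mail actually arrives before keepalived calls it:
/etc/keepalived/scripts/notify.sh MASTER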
Restart the keepalived service:
systemctl restart keepalived
You can see that the VIP has moved to the backup node.
-------------------------------------------------------------------------------------------------------------------------------------
Mycat table sharding
1. Using two dataNodes as an example, create a new database cs1_db on mysql-master (mysql-slave will replicate cs1_db); the cs_db database was created earlier. Create the table 'wuhan' in both databases.
2. On the Mycat nodes, apply the following sharding configuration on both (a sketch of the relevant sections follows the two files below):
vim schema.xml
vim rule.xml
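The exact schema.xml and rule.xml changes depend on the sharding rule you pick; below is a minimal sketch that shards the wuhan table across two dataNodes with a simple modulo rule on the id column. The dataNode names dn1/dn2, the dataHost dh1 and the mod-long rule are illustrative assumptions; cs_db, cs1_db and the wuhan table come from the steps above.

schema.xml:

<schema name="TSDB" checkSQLschema="false" sqlMaxLimit="100">
    <table name="wuhan" dataNode="dn1,dn2" rule="mod-rule"/>
</schema>
<dataNode name="dn1" dataHost="dh1" database="cs_db"/>
<dataNode name="dn2" dataHost="dh1" database="cs1_db"/>

rule.xml:

<tableRule name="mod-rule">
    <rule>
        <columns>id</columns>
        <algorithm>mod-long</algorithm>
    </rule>
</tableRule>
<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <!-- number of dataNodes to split across -->
    <property name="count">2</property>
</function>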
3. Restart the Mycat service:
cd /opt/mycat/bin && ./mycat restart
4. Connect to the Mycat service through the VIP and insert data:
mysql -ulilong -p111111 -P8066 -h 192.168.205.250
insert into wuhan(id,address) values(7,'xx'),(8,'xg'),(9,'lx'),(10,'ob'),(11,'kx'),(12,'hx');
5. Check the data in the wuhan table in both databases: the rows have been split between the two databases, into the same table name in each.
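Since both backend databases live on the same MySQL instances, the split can be checked on mysql-master with, for example:
select * from cs_db.wuhan;
select * from cs1_db.wuhan;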
6. You can also see that mysql-slave has replicated the same data.
This completes the whole configuration.