
Setting Up a MySQL PXC Cluster with Docker

A PXC cluster (Percona XtraDB Cluster) is a MySQL variant built from Percona Server and the Galera replication middleware. Percona Server is reputed to perform somewhat better than stock MySQL, and since it is MySQL-based, you can connect to it with the ordinary MySQL JDBC driver and client tools.

Compared with a single MySQL instance or a MySQL master-slave replication setup, a PXC cluster has the following advantages:

1. Every node is readable and writable, which gives you HA almost for free: any single node going down does not interrupt the service as a whole.

2. A write on any node is replicated with transactional strong consistency, so you avoid the inconsistencies that asynchronous master-slave replication can cause.

3. When a node goes down and is later restarted, it automatically catches up on the data it missed while it was offline.

So a PXC cluster not only supports traditional read/write splitting, it also provides a strong answer to high availability and strong consistency, greatly improving MySQL's reliability. Deployment is simple, too. Below I walk through deploying a PXC cluster in Docker, using ubuntu:16.04 containers as an example.

First, pull the ubuntu:16.04 base image:

docker pull ubuntu:16.04

Then enter the container and install wget and vim, which we will need shortly:

C:\Users\alex>docker run -itd ubuntu:16.04 /bin/bash
58470aed38c6569fc5a450a59b18163cbfc47c661c830057caed40e180957b67

C:\Users\alex>docker attach 5
root@58470aed38c6:/# apt-get update
root@58470aed38c6:/# apt-get install wget vim

Next, download the Percona XtraDB Cluster apt repository package so the relevant packages become easy to install. Note that xenial below is the codename for Ubuntu 16.04; adjust it for your OS release:

root@58470aed38c6:/# cd /home
root@58470aed38c6:/home# wget https://repo.percona.com/apt/percona-release_0.1-6.xenial_all.deb

You can also browse https://repo.percona.com/apt/ and pick a deb package manually.

Now install the deb package and see which packages it makes available:

dpkg -i ./percona-release_0.1-6.xenial_all.deb
apt-get update
apt-cache search percona

The package list looks like this:

galera-3 - Replication framework for transactional applications
percona-galera-3 - Galera replication framework for Percona XtraDB Cluster
percona-galera-3-dbg - debugging symbols for percona-galera-3
percona-galera-arbitrator-3 - Galera arbitrator daemon for Percona XtraDB Cluster
percona-galera-arbitrator-3-dbg - debugging symbols for percona-galera-arbitrator-3
percona-server-5.6-dbg - Debugging package for Percona Server
percona-server-server - Percona Server database server
percona-server-server-5.6 - Percona Server database server binaries
percona-server-source-5.6 - Percona Server 5.6 source
percona-server-test - Percona Servere regression test suite
percona-server-test-5.6 - Percona Server database test suite
percona-xtrabackup-dbg - Debug symbols for Percona XtraBackup
percona-xtrabackup-test - Test suite for Percona XtraBackup
percona-xtradb-cluster-server - Percona XtraDB Cluster database server
percona-xtradb-cluster-server-5.6 - Percona XtraDB Cluster database server binaries
xtrabackup - Transitional package for percona-xtrabackup
libperconaserverclient18 - Percona Server database client library
libperconaserverclient18-dev - Percona Server database development files
libperconaserverclient18.1 - Percona Server database client library
libperconaserverclient18.1-dev - Percona Server database development files
libperconaserverclient20 - Percona Server database client library
libperconaserverclient20-dev - Percona Server database development files
percona-cacti-templates - Percona Monitoring Plugins for Cacti
percona-nagios-plugins - Percona Monitoring Plugins for Nagios
percona-release - Package to install Percona gpg key and APT repo
percona-server-5.5-dbg - Debugging package for Percona Server
percona-server-5.7-dbg - Debugging package for Percona Server
percona-server-client - Percona Server database client
percona-server-client-5.5 - Percona Server database client binaries
percona-server-client-5.6 - Percona Server database client binaries
percona-server-client-5.7 - Percona Server database client binaries
percona-server-common-5.5 - Percona Server database common files
percona-server-common-5.6 - Percona Server database common files (e.g. /etc/mysql/my.cnf)
percona-server-common-5.7 - Percona Server database common files (e.g. /etc/mysql/my.cnf)
percona-server-mongodb - This metapackage will install the mongo shell, import/export tools, other client utilities, server software, default configuration, and init.d scripts.
percona-server-mongodb-32 - This metapackage will install the mongo shell, import/export tools, other client utilities, server software, default configuration, and init.d scripts.
percona-server-mongodb-32-dbg - Debugging package for Percona Server for MongoDB
percona-server-mongodb-32-mongos - This package contains mongos - the Percona Server for MongoDB sharded cluster query router
percona-server-mongodb-32-server - This package contains the Percona Server for MongoDB server software, default configuration files and init.d scripts
percona-server-mongodb-32-shell - This package contains the Percona Server for MongoDB shell
percona-server-mongodb-32-tools - Mongo tools for high-performance MongoDB fork from Percona
percona-server-mongodb-34 - This metapackage will install the mongo shell, import/export tools, other client utilities, server software, default configuration, and init.d scripts.
percona-server-mongodb-34-dbg - Debugging package for Percona Server for MongoDB
percona-server-mongodb-34-mongos - This package contains mongos - the Percona Server for MongoDB sharded cluster query router
percona-server-mongodb-34-server - This package contains the Percona Server for MongoDB server software, default configuration files and init.d scripts
percona-server-mongodb-34-shell - This package contains the Percona Server for MongoDB shell
percona-server-mongodb-34-tools - Mongo tools for high-performance MongoDB fork from Percona
percona-server-mongodb-36 - This metapackage will install the mongo shell, import/export tools, other client utilities, server software, default configuration, and init.d scripts.
percona-server-mongodb-36-dbg - Debugging package for Percona Server for MongoDB
percona-server-mongodb-36-mongos - This package contains mongos - the Percona Server for MongoDB sharded cluster query router
percona-server-mongodb-36-server - This package contains the Percona Server for MongoDB server software, default configuration files and init.d scripts
percona-server-mongodb-36-shell - This package contains the Percona Server for MongoDB shell
percona-server-mongodb-36-tools - Mongo tools for high-performance MongoDB fork from Percona
percona-server-mongodb-dbg - Debugging package for Percona Server for MongoDB
percona-server-mongodb-mongos - This package contains mongos - the Percona Server for MongoDB sharded cluster query router
percona-server-mongodb-server - This package contains the Percona Server for MongoDB server software, default configuration files and init.d scripts
percona-server-mongodb-shell - This package contains the Percona Server for MongoDB shell
percona-server-mongodb-tools - Mongo tools for high-performance MongoDB fork from Percona
percona-server-rocksdb-5.7 - MyRocks storage engine plugin for Percona Server
percona-server-server-5.5 - Percona Server database server binaries
percona-server-server-5.7 - Percona Server database server binaries
percona-server-source-5.5 - Percona Server 5.5 source
percona-server-source-5.7 - Percona Server 5.7 source
percona-server-test-5.5 - Percona Server database test suite
percona-server-test-5.7 - Percona Server database test suite
percona-server-tokudb-5.6 - TokuDB engine plugin for Percona Server
percona-server-tokudb-5.7 - TokuDB engine plugin for Percona Server
percona-toolkit - Advanced MySQL and system command-line tools
percona-xtrabackup - Open source backup tool for InnoDB and XtraDB
percona-xtrabackup-24 - Open source backup tool for InnoDB and XtraDB
percona-xtrabackup-dbg-24 - Debug symbols for Percona XtraBackup
percona-xtrabackup-test-24 - Test suite for Percona XtraBackup
percona-xtradb-cluster-5.6-dbg - Debugging package for Percona XtraDB Cluster
percona-xtradb-cluster-5.7-dbg - Debugging package for Percona XtraDB Cluster
percona-xtradb-cluster-56 - Percona XtraDB Cluster with Galera
percona-xtradb-cluster-57 - Percona XtraDB Cluster with Galera
percona-xtradb-cluster-client-5.6 - Percona XtraDB Cluster database client binaries
percona-xtradb-cluster-client-5.7 - Percona XtraDB Cluster database client binaries
percona-xtradb-cluster-common-5.6 - Percona XtraDB Cluster database common files (e.g. /etc/mysql/my.cnf)
percona-xtradb-cluster-common-5.7 - Percona XtraDB Cluster database common files (e.g. /etc/mysql/my.cnf)
percona-xtradb-cluster-full-56 - Percona XtraDB Cluster with Galera
percona-xtradb-cluster-full-57 - Percona XtraDB Cluster with Galera
percona-xtradb-cluster-galera-3 - Metapackage for latest version of galera3.
percona-xtradb-cluster-galera-3.x - Galera components of Percona XtraDB Cluster
percona-xtradb-cluster-galera-3.x-dbg - Debugging package for Percona XtraDB Cluster Galera 3.
percona-xtradb-cluster-galera3-dbg - Metapackage for latest version of debug packages.
percona-xtradb-cluster-garbd-3 - Metapackage for latest version of garbd3.
percona-xtradb-cluster-garbd-3.x - Garbd components of Percona XtraDB Cluster
percona-xtradb-cluster-garbd-3.x-dbg - Debugging package for Percona XtraDB Cluster Garbd 3.
percona-xtradb-cluster-garbd-5.7 - Garbd components of Percona XtraDB Cluster
percona-xtradb-cluster-garbd-debug-5.7 - Debugging package for Percona XtraDB Cluster Garbd.
percona-xtradb-cluster-server-5.7 - Percona XtraDB Cluster database server binaries
percona-xtradb-cluster-server-debug-5.6 - Percona XtraDB Cluster database server UNIV_DEBUG binaries
percona-xtradb-cluster-server-debug-5.7 - Percona XtraDB Cluster database server UNIV_DEBUG binaries
percona-xtradb-cluster-source-5.6 - Percona XtraDB Cluster 5.6 source
percona-xtradb-cluster-source-5.7 - Percona XtraDB Cluster 5.7 source
percona-xtradb-cluster-test-5.6 - Percona XtraDB Cluster database test suite
percona-xtradb-cluster-test-5.7 - Percona XtraDB Cluster database test suite
percona-zabbix-templates - Percona Monitoring Plugins for Zabbix
pmm-client - Percona Monitoring and Management Client

As you can see, there are many packages to choose from. Since a multi-node read-write cluster needs the Galera plugin, the simplest route is to install the full bundle in one step. The install is somewhat larger, but you are spared working out what each package does and how they depend on each other:

apt-get install percona-xtradb-cluster-full-57

This is also the approach recommended in the official manual. Roughly the following packages will be installed:

The following additional packages will be installed:
  bzip2 debsums ifupdown iproute iproute2 isc-dhcp-client isc-dhcp-common krb5-locales libaio1 libasn1-8-heimdal
  libatm1 libboost-program-options1.58.0 libbsd0 libcurl3 libdbd-mysql-perl libdbi-perl libdns-export162 libdpkg-perl
  libev4 libffi6 libfile-fcntllock-perl libfile-fnmatch-perl libgdbm3 libgmp10 libgnutls30 libgssapi-krb5-2
  libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
  libisc-export160 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 libmecab2
  libmnl0 libmysqlclient20 libnettle6 libnuma1 libp11-kit0 libperl5.22 libpopt0 libroken18-heimdal librtmp1 libsasl2-2
  libsasl2-modules libsasl2-modules-db libtasn1-6 libwind0-heimdal libwrap0 libxtables11 lsof mysql-common netbase
  netcat-openbsd percona-xtrabackup-24 percona-xtradb-cluster-5.7-dbg percona-xtradb-cluster-client-5.7
  percona-xtradb-cluster-common-5.7 percona-xtradb-cluster-garbd-5.7 percona-xtradb-cluster-garbd-debug-5.7
  percona-xtradb-cluster-server-5.7 percona-xtradb-cluster-server-debug-5.7 percona-xtradb-cluster-test-5.7 perl
  perl-modules-5.22 psmisc qpress rename rsync socat tcpd ucf xz-utils
Suggested packages:
  bzip2-doc ppp rdnssd iproute2-doc resolvconf avahi-autoipd isc-dhcp-client-ddns apparmor libclone-perl libmldbm-perl
  libnet-daemon-perl libsql-statement-perl debian-keyring gcc | c-compiler binutils patch gnutls-bin krb5-doc
  krb5-user libsasl2-modules-otp libsasl2-modules-ldap libsasl2-modules-sql libsasl2-modules-gssapi-mit
  | libsasl2-modules-gssapi-heimdal tinyca pv perl-doc libterm-readline-gnu-perl | libterm-readline-perl-perl make
  openssh-client openssh-server
The following NEW packages will be installed:
  bzip2 debsums ifupdown iproute iproute2 isc-dhcp-client isc-dhcp-common krb5-locales libaio1 libasn1-8-heimdal
  libatm1 libboost-program-options1.58.0 libbsd0 libcurl3 libdbd-mysql-perl libdbi-perl libdns-export162 libdpkg-perl
  libev4 libffi6 libfile-fcntllock-perl libfile-fnmatch-perl libgdbm3 libgmp10 libgnutls30 libgssapi-krb5-2
  libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal libhogweed4 libhx509-5-heimdal
  libisc-export160 libk5crypto3 libkeyutils1 libkrb5-26-heimdal libkrb5-3 libkrb5support0 libldap-2.4-2 libmecab2
  libmnl0 libmysqlclient20 libnettle6 libnuma1 libp11-kit0 libperl5.22 libpopt0 libroken18-heimdal librtmp1 libsasl2-2
  libsasl2-modules libsasl2-modules-db libtasn1-6 libwind0-heimdal libwrap0 libxtables11 lsof mysql-common netbase
  netcat-openbsd percona-xtrabackup-24 percona-xtradb-cluster-5.7-dbg percona-xtradb-cluster-client-5.7
  percona-xtradb-cluster-common-5.7 percona-xtradb-cluster-full-57 percona-xtradb-cluster-garbd-5.7
  percona-xtradb-cluster-garbd-debug-5.7 percona-xtradb-cluster-server-5.7 percona-xtradb-cluster-server-debug-5.7
  percona-xtradb-cluster-test-5.7 perl perl-modules-5.22 psmisc qpress rename rsync socat tcpd ucf xz-utils
0 upgraded, 80 newly installed, 0 to remove and 2 not upgraded.
Need to get 262 MB of archives.
After this operation, 960 MB of additional disk space will be used.

With a decent connection the installation finishes in a few minutes; after that, a little configuration is all it takes to have a working PXC cluster.

Near the end of the installation you will be asked to set a root password for the database:

Configuring percona-xtradb-cluster-server-5.7
---------------------------------------------

Data directory found when no Percona Server package is installed

A data directory '/var/lib/mysql' is present on this system when no MySQL server package is currently installed on the
system. The directory may be under control of server package received from third-party vendors. It may also be an
unclaimed data directory from previous removal of mysql packages.

It is highly recommended to take data backup. If you have not done so, now would be the time to take backup in another
shell. Once completed, press 'Ok' to continue.

Please provide a strong password that will be set for the root account of your MySQL database. Leave it blank to enable
password less login using UNIX socket based authentication.

Enter root password:

Now that you have selected a password for the root account, please confirm by typing it again. Do not share the
password with anyone.

Re-enter root password:

Percona Server starts automatically once installation finishes, so the first thing to do is stop it:

service mysql stop

Up to this point every node is set up identically. From here on, each node gets its own configuration file and its own way of being started.

With a deb-based install of Percona Server, do not try to edit /etc/mysql/my.cnf directly; put the node-specific settings in /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf instead. That file ships as a template, so only small changes are needed:

[mysqld]
# Path to Galera library
wsrep_provider=/usr/lib/galera3/libgalera_smm.so  # path to the Galera library; make sure this file exists on disk

# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4 # list every node of the cluster here; not all of them have to be up

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW # ROW is the only valid value

# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB

# Slave thread to use
wsrep_slave_threads= 8

wsrep_log_conflicts

# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node IP address
wsrep_node_address=172.17.0.2 # the IP address of this node
# Cluster name
wsrep_cluster_name=pxc-cluster # the cluster name; must be identical on every node in the cluster

#If wsrep_node_name is not specified,  then system hostname will be used
wsrep_node_name=pxc-cluster-node-1 # the name of this node; any value will do

#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
pxc_strict_mode=ENFORCING # strict consistency mode; tables without a primary key cannot be modified

# SST method
wsrep_sst_method=xtrabackup-v2

#Authentication for SST method
wsrep_sst_auth=sstuser:123456 # username and password for SST; you must also create this account manually in MySQL later

The configuration is essentially the same on every node; only two lines differ:

  • For the second node:

    wsrep_node_name=pxc2
    wsrep_node_address=172.17.0.3
    
  • For the third node:

    wsrep_node_name=pxc3
    wsrep_node_address=172.17.0.4
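Since the per-node files differ in only these two lines, they can be generated from a shared template. A minimal shell sketch; the template path and the @ID@/@IP@ placeholders are made up for this illustration, they are not part of PXC:

```shell
# Generate a node-specific wsrep fragment from a shared template.
# The /tmp path and @ID@/@IP@ placeholders are assumptions of this sketch.
cat > /tmp/wsrep.cnf.tpl <<'EOF'
wsrep_node_name=pxc-cluster-node-@ID@
wsrep_node_address=@IP@
EOF

gen_node_cnf() {
  local id="$1" ip="$2"
  sed -e "s/@ID@/${id}/" -e "s/@IP@/${ip}/" /tmp/wsrep.cnf.tpl
}

gen_node_cnf 2 172.17.0.3
```

In practice you would redirect the output into each node's /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf alongside the shared settings shown above.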

The order in which the nodes are started matters. On the first node you must create the sstuser account referenced in the configuration above and start MySQL in bootstrap mode; the other two nodes can simply be started normally. On the first node, create the user like this:

mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY '123456';
mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;

Then bring up the first node:

/etc/init.d/mysql bootstrap-pxc

You can then run a command to inspect the node's state. Note that the node works fine even while the other two nodes are still down; once they come up, any data already inserted will be synchronized over to them.

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 39e4729d-6345-11e8-8c52-0b4b3546a2be |
| ...                        | ...                                  |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
| ...                        | ...                                  |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| ...                        | ...                                  |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)
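The fields worth watching in that output are wsrep_local_state_comment (Synced), wsrep_cluster_status (Primary) and wsrep_ready (ON). A small shell sketch of such a health check, run here against a captured sample rather than a live server; the mysql invocation mentioned in the comment is how you would feed it real output:

```shell
# Health check over "show status like 'wsrep%'" output. In real use you
# would pipe in e.g.:  mysql -N -e "show status like 'wsrep%'"
node_is_healthy() {
  local status="$1"
  echo "$status" | grep -q $'wsrep_local_state_comment\tSynced' &&
  echo "$status" | grep -q $'wsrep_cluster_status\tPrimary' &&
  echo "$status" | grep -q $'wsrep_ready\tON'
}

sample=$'wsrep_local_state_comment\tSynced\nwsrep_cluster_status\tPrimary\nwsrep_ready\tON'
node_is_healthy "$sample" && echo "node is healthy"
```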

Bringing up the other nodes is much simpler: write their configuration following the first node's, then just start MySQL:

/etc/init.d/mysql start

You can then check the working state on that node as well:

mysql> show status like 'wsrep%';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| wsrep_local_state_uuid           | 39e4729d-6345-11e8-8c52-0b4b3546a2be |
| wsrep_protocol_version           | 8                                    |
| wsrep_last_applied               | 3                                    |
| wsrep_last_committed             | 3                                    |
| wsrep_replicated                 | 0                                    |
| wsrep_replicated_bytes           | 0                                    |
| wsrep_repl_keys                  | 0                                    |
| wsrep_repl_keys_bytes            | 0                                    |
| wsrep_repl_data_bytes            | 0                                    |
| wsrep_repl_other_bytes           | 0                                    |
| wsrep_received                   | 3                                    |
| wsrep_received_bytes             | 240                                  |
| wsrep_local_commits              | 0                                    |
| wsrep_local_cert_failures        | 0                                    |
| wsrep_local_replays              | 0                                    |
| wsrep_local_send_queue           | 0                                    |
| wsrep_local_send_queue_max       | 1                                    |
| wsrep_local_send_queue_min       | 0                                    |
| wsrep_local_send_queue_avg       | 0.000000                             |
| wsrep_local_recv_queue           | 0                                    |
| wsrep_local_recv_queue_max       | 1                                    |
| wsrep_local_recv_queue_min       | 0                                    |
| wsrep_local_recv_queue_avg       | 0.000000                             |
| wsrep_local_cached_downto        | 0                                    |
| wsrep_flow_control_paused_ns     | 0                                    |
| wsrep_flow_control_paused        | 0.000000                             |
| wsrep_flow_control_sent          | 0                                    |
| wsrep_flow_control_recv          | 0                                    |
| wsrep_flow_control_interval      | [ 141, 141 ]                         |
| wsrep_flow_control_interval_low  | 141                                  |
| wsrep_flow_control_interval_high | 141                                  |
| wsrep_flow_control_status        | OFF                                  |
| wsrep_cert_deps_distance         | 0.000000                             |
| wsrep_apply_oooe                 | 0.000000                             |
| wsrep_apply_oool                 | 0.000000                             |
| wsrep_apply_window               | 0.000000                             |
| wsrep_commit_oooe                | 0.000000                             |
| wsrep_commit_oool                | 0.000000                             |
| wsrep_commit_window              | 0.000000                             |
| wsrep_local_state                | 4                                    |
| wsrep_local_state_comment        | Synced                               |
| wsrep_cert_index_size            | 0                                    |
| wsrep_cert_bucket_count          | 22                                   |
| wsrep_gcache_pool_size           | 1456                                 |
| wsrep_causal_reads               | 0                                    |
| wsrep_cert_interval              | 0.000000                             |
| wsrep_ist_receive_status         |                                      |
| wsrep_ist_receive_seqno_start    | 0                                    |
| wsrep_ist_receive_seqno_current  | 0                                    |
| wsrep_ist_receive_seqno_end      | 0                                    |
| wsrep_incoming_addresses         | 172.17.0.2:3306,172.17.0.3:3306      |
| wsrep_desync_count               | 0                                    |
| wsrep_evs_delayed                |                                      |
| wsrep_evs_evict_list             |                                      |
| wsrep_evs_repl_latency           | 0/0/0/0/0                            |
| wsrep_evs_state                  | OPERATIONAL                          |
| wsrep_gcomm_uuid                 | 84fe5351-6346-11e8-9080-ea323a394028 |
| wsrep_cluster_conf_id            | 2                                    |
| wsrep_cluster_size               | 2                                    |
| wsrep_cluster_state_uuid         | 39e4729d-6345-11e8-8c52-0b4b3546a2be |
| wsrep_cluster_status             | Primary                              |
| wsrep_connected                  | ON                                   |
| wsrep_local_bf_aborts            | 0                                    |
| wsrep_local_index                | 1                                    |
| wsrep_provider_name              | Galera                               |
| wsrep_provider_vendor            | Codership Oy <[email protected]>    |
| wsrep_provider_version           | 3.26(rac090bc)                       |
| wsrep_ready                      | ON                                   |
+----------------------------------+--------------------------------------+
68 rows in set (0.00 sec)

The wsrep_incoming_addresses entry shows how many nodes are currently active.
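Counting the active nodes from that comma-separated list is easy to script; a sketch, with the sample value taken from the output above:

```shell
# Count active cluster members from wsrep_incoming_addresses.
node_count() { echo "$1" | tr ',' '\n' | grep -c .; }

node_count "172.17.0.2:3306,172.17.0.3:3306"   # prints 2
```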

The only difference between the first node and the others is how they are started: whichever node is started in bootstrap mode seeds the cluster. A node can also belong to different clusters over its lifetime. For example, if it starts out in cluster A, you can stop it, edit its configuration to point at cluster B, and during startup its data will automatically be replaced with cluster B's data, which is very convenient and needs no manual intervention.
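Re-pointing a node at another cluster is just a matter of rewriting wsrep_cluster_address before restarting MySQL. A hedged sketch: /tmp/wsrep.cnf stands in for the real config file, and the 10.0.0.x addresses are an imaginary cluster B.

```shell
# Rewrite wsrep_cluster_address in place. /tmp/wsrep.cnf stands in for
# /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf, and the 10.0.0.x
# member list is a made-up cluster B.
cnf=/tmp/wsrep.cnf
printf 'wsrep_cluster_address=gcomm://172.17.0.2,172.17.0.3,172.17.0.4\n' > "$cnf"
sed -i 's#^wsrep_cluster_address=.*#wsrep_cluster_address=gcomm://10.0.0.2,10.0.0.3#' "$cnf"
grep '^wsrep_cluster_address' "$cnf"
```

On the real file you would wrap this in service mysql stop / start; on its first start in the new cluster the node pulls a full state transfer from its new peers.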

At this point you can write data on either node and verify that the two stay in sync.

After bringing up the third node the same way as the second, you can observe the cluster's state from the third node:

mysql> show status like 'wsrep%';
+----------------------------------+-------------------------------------------------+
| Variable_name                    | Value                                           |
+----------------------------------+-------------------------------------------------+
| wsrep_local_state_uuid           | 39e4729d-6345-11e8-8c52-0b4b3546a2be            |
| wsrep_protocol_version           | 8                                               |
| wsrep_last_applied               | 4                                               |
| wsrep_last_committed             | 4                                               |
| wsrep_replicated                 | 0                                               |
| wsrep_replicated_bytes           | 0                                               |
| wsrep_repl_keys                  | 0                                               |
| wsrep_repl_keys_bytes            | 0                                               |
| wsrep_repl_data_bytes            | 0                                               |
| wsrep_repl_other_bytes           | 0                                               |
| wsrep_received                   | 3                                               |
| wsrep_received_bytes             | 320                                             |
| wsrep_local_commits              | 0                                               |
| wsrep_local_cert_failures        | 0                                               |
| wsrep_local_replays              | 0                                               |
| wsrep_local_send_queue           | 0                                               |
| wsrep_local_send_queue_max       | 1                                               |
| wsrep_local_send_queue_min       | 0                                               |
| wsrep_local_send_queue_avg       | 0.000000                                        |
| wsrep_local_recv_queue           | 0                                               |
| wsrep_local_recv_queue_max       | 1                                               |
| wsrep_local_recv_queue_min       | 0                                               |
| wsrep_local_recv_queue_avg       | 0.000000                                        |
| wsrep_local_cached_downto        | 0                                               |
| wsrep_flow_control_paused_ns     | 0                                               |
| wsrep_flow_control_paused        | 0.000000                                        |
| wsrep_flow_control_sent          | 0                                               |
| wsrep_flow_control_recv          | 0                                               |
| wsrep_flow_control_interval      | [ 173, 173 ]                                    |
| wsrep_flow_control_interval_low  | 173                                             |
| wsrep_flow_control_interval_high | 173                                             |
| wsrep_flow_control_status        | OFF                                             |
| wsrep_cert_deps_distance         | 0.000000                                        |
| wsrep_apply_oooe                 | 0.000000                                        |
| wsrep_apply_oool                 | 0.000000                                        |
| wsrep_apply_window               | 0.000000                                        |
| wsrep_commit_oooe                | 0.000000                                        |
| wsrep_commit_oool                | 0.000000                                        |
| wsrep_commit_window              | 0.000000                                        |
| wsrep_local_state                | 4                                               |
| wsrep_local_state_comment        | Synced                                          |
| wsrep_cert_index_size            | 0                                               |
| wsrep_cert_bucket_count          | 22                                              |
| wsrep_gcache_pool_size           | 1456                                            |
| wsrep_causal_reads               | 0                                               |
| wsrep_cert_interval              | 0.000000                                        |
| wsrep_ist_receive_status         |                                                 |
| wsrep_ist_receive_seqno_start    | 0                                               |
| wsrep_ist_receive_seqno_current  | 0                                               |
| wsrep_ist_receive_seqno_end      | 0                                               |
| wsrep_incoming_addresses         | 172.17.0.4:3306,172.17.0.2:3306,172.17.0.3:3306 |
| wsrep_desync_count               | 0                                               |
| wsrep_evs_delayed                |                                                 |
| wsrep_evs_evict_list             |                                                 |
| wsrep_evs_repl_latency           | 0/0/0/0/0                                       |
| wsrep_evs_state                  | OPERATIONAL                                     |
| wsrep_gcomm_uuid                 | 0e5b5865-6348-11e8-8a4c-b370847054fd            |
| wsrep_cluster_conf_id            | 3                                               |
| wsrep_cluster_size               | 3                                               |
| wsrep_cluster_state_uuid         | 39e4729d-6345-11e8-8c52-0b4b3546a2be            |
| wsrep_cluster_status             | Primary                                         |
| wsrep_connected                  | ON                                              |
| wsrep_local_bf_aborts            | 0                                               |
| wsrep_local_index                | 0                                               |
| wsrep_provider_name              | Galera                                          |
| wsrep_provider_vendor            | Codership Oy <[email protected]>               |
| wsrep_provider_version           | 3.26(rac090bc)                                  |
| wsrep_ready                      | ON                                              |
+----------------------------------+-------------------------------------------------+
68 rows in set (0.01 sec)

Also note that all data written by the first two nodes before the third node started has been successfully synchronized to it.

And it is not only the ordinary nodes that can freely switch clusters; the bootstrap node can do the same. By changing how it is started, it can join another cluster as a regular node, or seed a new cluster of its own.

If you want to add read/write splitting with a database middleware on top of this, see my other two blog posts.

