
Redis High Availability (Usage)


Redis replication solves the single-point problem, but if the master node fails, failover still requires manual intervention. Let's first look at how failover is done by hand in a one-master, two-replica setup (master, slave-1 and slave-2); a command sketch follows the list.


1. After the master fails, clients can no longer connect to it, and both slaves lose their connection to the master, so replication is interrupted.

2. Pick one of the slaves (slave-1) and run slaveof no one on it so that it becomes the new master (new-master).

3. Once slave-1 has become the new master, update the master information on the application side and restart the application.

4. Instruct the other slave (slave-2) to replicate from the new master (new-master).

5. When the original master recovers, make it replicate from the new master as well.
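The manual steps above map onto a handful of redis-cli commands; a minimal sketch, assuming slave-1 listens on port 6480 and slave-2 on port 6481, as in the deployment example below (add -a <password> if requirepass is configured):

# step 2: promote slave-1 to be the new master
redis-cli -p 6480 slaveof no one

# step 4: point slave-2 at the new master
redis-cli -p 6481 slaveof 127.0.0.1 6480

# step 5: once the old master (6479) is back, make it a replica of the new master
redis-cli -p 6479 slaveof 127.0.0.1 6480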


It is hard to guarantee that a manual process like this is carried out accurately and in time, and that is exactly the problem Redis Sentinel is designed to solve.


Redis Sentinel is a distributed architecture consisting of several Sentinel nodes and several Redis data nodes. Each Sentinel node monitors the data nodes and the other Sentinel nodes, and marks a node as down when it finds it unreachable. If the flagged node is the master, the Sentinel also negotiates with the other Sentinel nodes; once a majority of them agree that the master is unreachable, they elect one Sentinel node to carry out the automatic failover and notify the Redis application side of the change in real time. The whole process needs no human intervention, which effectively solves Redis's high-availability problem.



Deploying a Redis Sentinel High-Availability Architecture


1. Set up three Redis data nodes. Initial state: the master on port 6479, slave-1 on port 6480 and slave-2 on port 6481 (a minimal replica configuration sketch follows the output below).

127.0.0.1:6479> info replication

# Replication

role:master

connected_slaves:2

slave0:ip=127.0.0.1,port=6480,state=online,offset=845,lag=0

slave1:ip=127.0.0.1,port=6481,state=online,offset=845,lag=0
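For reference, a minimal sketch of what the two replica configuration files might contain; the shared password abcdefg is an assumption inferred from the sentinel auth-pass line in the Sentinel configuration below:

# redis-6480.conf (sketch; the 6481 file is analogous)
port 6480
daemonize yes
# assumed: every node uses the same password, matching sentinel auth-pass below
requirepass "abcdefg"
masterauth "abcdefg"
# on Redis 5.0 and later the directive is replicaof
slaveof 127.0.0.1 6479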


2. Set up three Sentinel nodes. The initial configuration file is shown below (the three nodes use ports 26479, 26480 and 26481; the 26480 and 26481 files differ only in port and paths). In the sentinel monitor line, the trailing 2 is the quorum: at least two Sentinel nodes must agree that the master is unreachable before it is marked objectively down and a failover can start. A start-up sketch follows the file.

port 26479

daemonize yes

loglevel notice

dir "/home/redis/stayfoolish/26479/data"

logfile "/home/redis/stayfoolish/26479/log/sentinel.log"

pidfile "/home/redis/stayfoolish/26479/log/sentinel.pid"

unixsocket "/home/redis/stayfoolish/26479/log/sentinel.sock"


# sfmaster

sentinel monitor sfmaster 127.0.0.1 6479 2

sentinel auth-pass sfmaster abcdefg

sentinel down-after-milliseconds sfmaster 30000

sentinel parallel-syncs sfmaster 1

sentinel failover-timeout sfmaster 180000
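Once the configuration files are in place, each Sentinel node can be started either with the redis-sentinel binary or with redis-server in Sentinel mode; a sketch, assuming the file above is saved as /home/redis/stayfoolish/26479/sentinel.conf:

redis-sentinel /home/redis/stayfoolish/26479/sentinel.conf
# or, equivalently
redis-server /home/redis/stayfoolish/26479/sentinel.conf --sentinel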


Start the Sentinel nodes and check their info: each node has found the master, discovered the two slaves, and discovered that there are three Sentinel nodes in total.

127.0.0.1:26479> info sentinel

# Sentinel

sentinel_masters:1

sentinel_tilt:0

sentinel_running_scripts:0

sentinel_scripts_queue_length:0

sentinel_simulate_failure_flags:0

master0:name=sfmaster,status=ok,address=127.0.0.1:6479,slaves=2,sentinels=3


At this point Redis Sentinel is up and running; with the Redis replication setup already in place, the process is fairly straightforward.
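Besides info sentinel, each Sentinel's view of the topology can be inspected in more detail, for example:

# details of every monitored master
redis-cli -p 26479 sentinel masters
# the replicas discovered for sfmaster
redis-cli -p 26479 sentinel slaves sfmaster
# the other Sentinel nodes monitoring sfmaster
redis-cli -p 26479 sentinel sentinels sfmaster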



Next, kill the master on port 6479 with kill -9 to simulate a failure, and walk through the failover process via the logs.


1. Kill the master on port 6479 (an alternative way to simulate the failure is sketched after the output).

$ ps -ef | egrep 'redis-server.*6479' | egrep -v 'egrep' | awk '{print $2}' | xargs kill -9


127.0.0.1:6479> info replication

Could not connect to Redis at 127.0.0.1:6479: Connection refused

not connected>
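As an aside, a failure can also be simulated by making the master unresponsive rather than killing it, for example by blocking it for longer than down-after-milliseconds with DEBUG SLEEP; a sketch, with the -a password assumed to match the sentinel auth-pass value above:

# block the 6479 master for 40 seconds, longer than the 30s down-after-milliseconds
redis-cli -p 6479 -a abcdefg debug sleep 40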


2. Look at the log of the Redis node on port 6480: it shows the failed attempts to connect to port 6479, the node being promoted to the new master by a Sentinel, and then the handling of the replication request from port 6481.

~/stayfoolish/6480/log $ tail -f redis.log

20047:S 22 Jul 03:03:22.946 # Error condition on socket for SYNC: Connection refused

20047:S 22 Jul 03:03:23.954 * Connecting to MASTER 127.0.0.1:6479

20047:S 22 Jul 03:03:23.955 * MASTER <-> SLAVE sync started

20047:S 22 Jul 03:03:23.955 # Error condition on socket for SYNC: Connection refused

...

20047:S 22 Jul 03:03:38.061 * MASTER <-> SLAVE sync started

20047:S 22 Jul 03:03:38.061 # Error condition on socket for SYNC: Connection refused

20047:M 22 Jul 03:03:38.963 * Discarding previously cached master state.

20047:M 22 Jul 03:03:38.963 * MASTER MODE enabled (user request from 'id=27 addr=127.0.0.1:37972 fd=10 name=sentinel-68102904-cmd age=882 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=0 qbuf-free=32768 obl=36 oll=0 omem=0 events=r cmd=exec')

20047:M 22 Jul 03:03:38.963 # CONFIG REWRITE executed with success.

20047:M 22 Jul 03:03:40.075 * Slave 127.0.0.1:6481 asks for synchronization

20047:M 22 Jul 03:03:40.076 * Full resync requested by slave 127.0.0.1:6481

20047:M 22 Jul 03:03:40.077 * Starting BGSAVE for SYNC with target: disk

20047:M 22 Jul 03:03:40.077 * Background saving started by pid 20452

20452:C 22 Jul 03:03:40.086 * DB saved on disk

20452:C 22 Jul 03:03:40.086 * RDB: 0 MB of memory used by copy-on-write

20047:M 22 Jul 03:03:40.175 * Background saving terminated with success

20047:M 22 Jul 03:03:40.176 * Synchronization with slave 127.0.0.1:6481 succeeded


Now look at the log on port 6481: it shows the failed attempts to connect to port 6479, the command received from a Sentinel node, and the process of replicating from the new master.

~/stayfoolish/6481/log $ tail -f redis.log

20051:S 22 Jul 03:03:08.590 # Connection with master lost.

20051:S 22 Jul 03:03:08.590 * Caching the disconnected master state.

20051:S 22 Jul 03:03:08.844 * Connecting to MASTER 127.0.0.1:6479

20051:S 22 Jul 03:03:08.844 * MASTER <-> SLAVE sync started

20051:S 22 Jul 03:03:08.844 # Error condition on socket for SYNC: Connection refused

...

20051:S 22 Jul 03:03:39.067 # Error condition on socket for SYNC: Connection refused

20051:S 22 Jul 03:03:39.342 * Discarding previously cached master state.

20051:S 22 Jul 03:03:39.342 * SLAVE OF 127.0.0.1:6480 enabled (user request from 'id=27 addr=127.0.0.1:38660 fd=10 name=sentinel-68102904-cmd age=883 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=133 qbuf-free=32635 obl=36 oll=0 omem=0 events=r cmd=exec')

20051:S 22 Jul 03:03:39.343 # CONFIG REWRITE executed with success.

20051:S 22 Jul 03:03:40.074 * Connecting to MASTER 127.0.0.1:6480

20051:S 22 Jul 03:03:40.074 * MASTER <-> SLAVE sync started

20051:S 22 Jul 03:03:40.074 * Non blocking connect for SYNC fired the event.

20051:S 22 Jul 03:03:40.074 * Master replied to PING, replication can continue...

20051:S 22 Jul 03:03:40.075 * Partial resynchronization not possible (no cached master)

20051:S 22 Jul 03:03:40.084 * Full resync from master: 84b623afc0824be14bb9187245ff00cab43427c1:1

20051:S 22 Jul 03:03:40.176 * MASTER <-> SLAVE sync: receiving 77 bytes from master

20051:S 22 Jul 03:03:40.176 * MASTER <-> SLAVE sync: Flushing old data

20051:S 22 Jul 03:03:40.176 * MASTER <-> SLAVE sync: Loading DB in memory

20051:S 22 Jul 03:03:40.176 * MASTER <-> SLAVE sync: Finished with success


3. Look at the logs of the Sentinel nodes on ports 26479, 26480 and 26481: they show how the Sentinel nodes cooperate to complete the failover (the mechanics behind this are left for the next article).

~/stayfoolish/26479/log $ tail -f sentinel.log

20169:X 22 Jul 03:03:38.720 # +sdown master sfmaster 127.0.0.1 6479

20169:X 22 Jul 03:03:38.742 # +new-epoch 1

20169:X 22 Jul 03:03:38.743 # +vote-for-leader 68102904daa4df70bf945677f62498bbdffee1d4 1

20169:X 22 Jul 03:03:38.778 # +odown master sfmaster 127.0.0.1 6479 #quorum 3/2

20169:X 22 Jul 03:03:38.779 # Next failover delay: I will not start a failover before Sun Jul 22 03:09:39 2018

20169:X 22 Jul 03:03:39.346 # +config-update-from sentinel 68102904daa4df70bf945677f62498bbdffee1d4 127.0.0.1 26481 @ sfmaster 127.0.0.1 6479

20169:X 22 Jul 03:03:39.346 # +switch-master sfmaster 127.0.0.1 6479 127.0.0.1 6480

20169:X 22 Jul 03:03:39.346 * +slave slave 127.0.0.1:6481 127.0.0.1 6481 @ sfmaster 127.0.0.1 6480

20169:X 22 Jul 03:03:39.346 * +slave slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480

20169:X 22 Jul 03:04:09.393 # +sdown slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480


~/stayfoolish/26480/log $ tail -f sentinel.log

20171:X 22 Jul 03:03:38.665 # +sdown master sfmaster 127.0.0.1 6479

20171:X 22 Jul 03:03:38.741 # +new-epoch 1

20171:X 22 Jul 03:03:38.742 # +vote-for-leader 68102904daa4df70bf945677f62498bbdffee1d4 1

20171:X 22 Jul 03:03:39.343 # +config-update-from sentinel 68102904daa4df70bf945677f62498bbdffee1d4 127.0.0.1 26481 @ sfmaster 127.0.0.1 6479

20171:X 22 Jul 03:03:39.344 # +switch-master sfmaster 127.0.0.1 6479 127.0.0.1 6480

20171:X 22 Jul 03:03:39.344 * +slave slave 127.0.0.1:6481 127.0.0.1 6481 @ sfmaster 127.0.0.1 6480

20171:X 22 Jul 03:03:39.344 * +slave slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480

20171:X 22 Jul 03:04:09.379 # +sdown slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480


~/stayfoolish/26481/log $ tail -f sentinel.log

20177:X 22 Jul 03:03:38.671 # +sdown master sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:38.730 # +odown master sfmaster 127.0.0.1 6479 #quorum 2/2

20177:X 22 Jul 03:03:38.730 # +new-epoch 1

20177:X 22 Jul 03:03:38.730 # +try-failover master sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:38.731 # +vote-for-leader 68102904daa4df70bf945677f62498bbdffee1d4 1

20177:X 22 Jul 03:03:38.742 # 88fc1c8a5cdb41f3f92ed8e83e92e11b244b6e1a voted for 68102904daa4df70bf945677f62498bbdffee1d4 1

20177:X 22 Jul 03:03:38.744 # fc2182cf6c2cc8ae88dbe4bec35f1cdd9e9b8d65 voted for 68102904daa4df70bf945677f62498bbdffee1d4 1

20177:X 22 Jul 03:03:38.815 # +elected-leader master sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:38.815 # +failover-state-select-slave master sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:38.871 # +selected-slave slave 127.0.0.1:6480 127.0.0.1 6480 @ sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:38.871 * +failover-state-send-slaveof-noone slave 127.0.0.1:6480 127.0.0.1 6480 @ sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:38.962 * +failover-state-wait-promotion slave 127.0.0.1:6480 127.0.0.1 6480 @ sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:39.269 # +promoted-slave slave 127.0.0.1:6480 127.0.0.1 6480 @ sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:39.269 # +failover-state-reconf-slaves master sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:39.342 * +slave-reconf-sent slave 127.0.0.1:6481 127.0.0.1 6481 @ sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:39.859 # -odown master sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:40.335 * +slave-reconf-inprog slave 127.0.0.1:6481 127.0.0.1 6481 @ sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:40.335 * +slave-reconf-done slave 127.0.0.1:6481 127.0.0.1 6481 @ sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:40.410 # +failover-end master sfmaster 127.0.0.1 6479

20177:X 22 Jul 03:03:40.410 # +switch-master sfmaster 127.0.0.1 6479 127.0.0.1 6480

20177:X 22 Jul 03:03:40.411 * +slave slave 127.0.0.1:6481 127.0.0.1 6481 @ sfmaster 127.0.0.1 6480

20177:X 22 Jul 03:03:40.411 * +slave slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480

20177:X 22 Jul 03:04:10.501 # +sdown slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480


4. Start the node on port 6479 again and check the log of the Sentinel node on port 26481: it shows the recovered node being converted into a slave of port 6480.

~/stayfoolish/26481/log $ tail -f sentinel.log

20177:X 22 Jul 03:33:36.960 # -sdown slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480

20177:X 22 Jul 03:33:46.959 * +convert-to-slave slave 127.0.0.1:6479 127.0.0.1 6479 @ sfmaster 127.0.0.1 6480


5. Check the new replication topology.

127.0.0.1:6480> info replication

# Replication

role:master

connected_slaves:2

slave0:ip=127.0.0.1,port=6481,state=online,offset=405522,lag=0

slave1:ip=127.0.0.1,port=6479,state=online,offset=405389,lag=0


127.0.0.1:26479> info sentinel

# Sentinel

sentinel_masters:1

...

master0:name=sfmaster,status=ok,address=127.0.0.1:6480,slaves=2,sentinels=3



Getting Familiar with the Sentinel API


A Sentinel node is a special kind of Redis node: it accepts only a small set of commands and has its own dedicated API. Let's focus on a few of the most useful ones.

1. sentinel get-master-addr-by-name <master name> returns the IP address and port of the master named <master name> (a client-side usage sketch follows the output below).

127.0.0.1:26479> sentinel get-master-addr-by-name sfmaster

1) "127.0.0.1"

2) "6480"


2. sentinel failover <master name> forces a failover of the master named <master name>, without negotiating with the other Sentinel nodes; once the failover completes, the other Sentinel nodes update their own configuration according to the result.

127.0.0.1:26479> sentinel failover sfmaster

OK


127.0.0.1:26479> info sentinel

# Sentinel

sentinel_masters:1

...

master0:name=sfmaster,status=ok,address=127.0.0.1:6481,slaves=2,sentinels=3


3. sentinel remove <master name> stops the current Sentinel node from monitoring the master named <master name>; note that the command affects only the Sentinel node it is issued on.

127.0.0.1:26479> sentinel remove sfmaster

OK


127.0.0.1:26479> info sentinel

# Sentinel

sentinel_masters:0

sentinel_tilt:0

sentinel_running_scripts:0

sentinel_scripts_queue_length:0

sentinel_simulate_failure_flags:0


4. sentinel monitor <master name> <ip> <port> <quorum> adds a master to be monitored. Run the command below to add back the master, now on port 6481, and check the result with info sentinel.

127.0.0.1:26479> sentinel monitor sfmaster 127.0.0.1 6481 2

OK


127.0.0.1:26479> info sentinel

# Sentinel

sentinel_masters:1

...

sentinel_simulate_failure_flags:0

master0:name=sfmaster,status=ok,address=127.0.0.1:6481,slaves=0,sentinels=3


It shows slaves=0, but it should be slaves=2. Why?


It turns out that when sentinel remove removes a master, it deletes that master's configuration from the Sentinel node, including the authentication line sentinel auth-pass sfmaster abcdefg; but sentinel monitor does not add that line back when the master is re-added (a small wart). After adding the line back by hand and restarting the Sentinel node on port 26479 (see the sketch below), everything looks normal again.
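A sketch of the fix, assuming the configuration file sits at /home/redis/stayfoolish/26479/sentinel.conf (on recent Redis versions, redis-cli -p 26479 sentinel set sfmaster auth-pass abcdefg should achieve the same effect without a restart):

# append the missing auth line to the Sentinel configuration
echo 'sentinel auth-pass sfmaster abcdefg' >> /home/redis/stayfoolish/26479/sentinel.conf
# restart the 26479 Sentinel so that it picks the line up
redis-cli -p 26479 shutdown
redis-sentinel /home/redis/stayfoolish/26479/sentinel.conf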

127.0.0.1:26479> info sentinel

# Sentinel

sentinel_masters:1

...

master0:name=sfmaster,status=ok,address=127.0.0.1:6481,slaves=2,sentinels=3


If you are interested, you can follow the subscription account "數據庫最佳實踐" (DBBestPractice).

