
Redis Cluster Environment Setup (Experiment)


Environment:

A cluster needs a majority of its masters alive to agree on failover, so an odd number of masters is recommended, and at least three are required.

Each master should have at least one replica, which gives 6 nodes in total (3 masters and 3 slaves).

Node information: I prepared 3 hosts here; each host runs one master and one slave.

Node 1: 192.168.2.100:6379 master

Node 2: 192.168.2.100:6380 slave

Node 3: 192.168.2.200:6379 master

Node 4: 192.168.2.200:6380 slave

Node 5: 192.168.2.201:6379 master

Node 6: 192.168.2.201:6380 slave


Installation paths for master and slave:

master: /usr/local/redis-3.0.6-6379

slave: /usr/local/redis-3.0.6-6380
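
If you are starting from source, the two per-instance directories can be prepared roughly like this (a sketch; the tarball name and its location are assumptions, adjust to wherever you downloaded Redis):

# tar xzf redis-3.0.6.tar.gz

# cd redis-3.0.6 && make //builds redis-server and redis-cli under src/

# cd .. && cp -r redis-3.0.6 /usr/local/redis-3.0.6-6379 //one copy per instance

# cp -r redis-3.0.6 /usr/local/redis-3.0.6-6380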

Master configuration file:

daemonize yes //run as a background daemon

pidfile /var/run/redis_6379.pid //pid file

port 6379 //listening port

bind 192.168.2.100 //defaults to 127.0.0.1; must be changed to an address the other nodes can reach

logfile "/usr/local/redis-3.0.6-6379/redis_6379.log" //log file path

dir /usr/local/redis-3.0.6-6379/ //working directory where the RDB file is written

appendonly yes //enable AOF persistence

cluster-enabled yes //enable cluster mode

cluster-config-file nodes-6379.conf //cluster state file, maintained by Redis itself

cluster-node-timeout 15000 //node timeout in milliseconds (default 15 seconds)

Slave configuration file:

daemonize yes

pidfile /var/run/redis_6380.pid

port 6380

bind 192.168.2.100

logfile "/usr/local/redis-3.0.6-6380/redis_6380.log"

dir /usr/local/redis-3.0.6-6380/

appendonly yes

cluster-enabled yes

cluster-config-file nodes-6380.conf

cluster-node-timeout 15000
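
Since the slave configuration differs from the master's only in the port-derived values, one way to generate it (a sketch, assuming each file is saved as redis.conf in its instance directory, as used by the startup command below):

# sed 's/6379/6380/g' /usr/local/redis-3.0.6-6379/redis.conf > /usr/local/redis-3.0.6-6380/redis.conf //rewrites the port, pidfile, logfile, dir, and cluster-config-file lines in one pass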


Start Redis:

# redis-server redis.conf //run once per instance, on all 6 nodes


# ps -ef |grep redis

root 22584 1 0 17:41 ? 00:00:00 redis-server 192.168.2.100:6379 [cluster]

root 22599 1 0 17:41 ? 00:00:00 redis-server 192.168.2.100:6380 [cluster]

root 22606 6650 0 17:41 pts/0 00:00:00 grep --color=auto redis
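
Before creating the cluster it is worth confirming that every instance answers (repeat on the other two hosts):

# redis-cli -h 192.168.2.100 -p 6379 ping
PONG

# redis-cli -h 192.168.2.100 -p 6380 ping
PONG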


Install the Ruby environment (redis-trib.rb is a Ruby script and needs Ruby to run):

# yum -y install ruby ruby-devel rubygems rpm-build

# gem install redis

Successfully installed redis-3.2.1

Parsing documentation for redis-3.2.1

1 gem installed


A possible error:

ERROR: Could not find a valid gem 'redis' (>= 0), here is why:

Unable to download data from https://rubygems.org/ - no such name (https://rubygems.org/latest_specs.4.8.gz)

Download the gem manually from https://rubygems.global.ssl.fastly.net/gems/redis-3.2.1.gem

and install it from the local file: # gem install -l ./redis-3.2.1.gem

Create the cluster:

Copy redis-trib.rb to /usr/local/bin so it is on the PATH:

# cp /usr/local/redis-3.0.6-6379/src/redis-trib.rb /usr/local/bin/

# redis-trib.rb create --replicas 1 192.168.2.100:6379 192.168.2.100:6380 192.168.2.200:6379 192.168.2.200:6380 192.168.2.201:6379 192.168.2.201:6380 //create the cluster with 1 replica per master

>>> Creating cluster

>>> Performing hash slots allocation on 6 nodes...

Using 3 masters:

192.168.2.100:6379 //the 3 master nodes

192.168.2.200:6379

192.168.2.201:6379

Adding replica 192.168.2.200:6380 to 192.168.2.100:6379 //the 3 slave nodes

Adding replica 192.168.2.100:6380 to 192.168.2.200:6379

Adding replica 192.168.2.201:6380 to 192.168.2.201:6379

M: 098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6379

slots:0-5460 (5461 slots) master //hash slot [0-5460]

S: 7119dec91b086ca8fe69f7878fa42b1accd75f0f 192.168.2.100:6380

replicates 5844b4272c39456b0fdf73e384ff8c479547de47

M: 5844b4272c39456b0fdf73e384ff8c479547de47 192.168.2.200:6379

slots:5461-10922 (5462 slots) master //hash slot [5461-10922]

S: 227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380

replicates 098e7eb756b6047fde988ab3c0b7189e1724ecf5

M: 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 192.168.2.201:6379

slots:10923-16383 (5461 slots) master //hash slot [10923-16383]

S: 2faf68564a70372cfc06c1afff197019cc6a39f3 192.168.2.201:6380

replicates 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69

Can I set the above configuration? (type 'yes' to accept): yes

>>> Nodes configuration updated

>>> Assign a different config epoch to each node

>>> Sending CLUSTER MEET messages to join the cluster

Waiting for the cluster to join..

>>> Performing Cluster Check (using node 192.168.2.100:6379)

M: 098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6379

slots:0-5460 (5461 slots) master

M: 7119dec91b086ca8fe69f7878fa42b1accd75f0f 192.168.2.100:6380

slots: (0 slots) master

replicates 5844b4272c39456b0fdf73e384ff8c479547de47

M: 5844b4272c39456b0fdf73e384ff8c479547de47 192.168.2.200:6379

slots:5461-10922 (5462 slots) master

M: 227f51028bbe827f27b4e40ed7a08fcc7d8df969 192.168.2.200:6380

slots: (0 slots) master

replicates 098e7eb756b6047fde988ab3c0b7189e1724ecf5

M: 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69 192.168.2.201:6379

slots:10923-16383 (5461 slots) master

M: 2faf68564a70372cfc06c1afff197019cc6a39f3 192.168.2.201:6380

slots: (0 slots) master

replicates 3ff3a74f9dc41f8bc635ab845ad76bf77ffb0f69

[OK] All nodes agree about slots configuration. //all nodes agree on the slot assignment

>>> Check for open slots... //check for slots left in an open (migrating) state

>>> Check slots coverage... //check slot coverage

[OK] All 16384 slots covered. //all 16384 slots are covered (assigned)


At this point, the Redis cluster deployment is complete.
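
A quick way to verify the cluster is to connect with redis-cli in cluster mode (-c) and watch a write get redirected to the node that owns the key's hash slot (the key foo is just an example):

# redis-cli -c -h 192.168.2.100 -p 6379
192.168.2.100:6379> cluster info //cluster_state should be ok and cluster_slots_assigned 16384
192.168.2.100:6379> set foo bar //if foo's slot lives on another master, the client prints the redirect and follows it
192.168.2.100:6379> get foo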


Misconception 1:

The cluster is a decentralized structure: within the cluster, each piece of data is stored twice, once on a master and once on one of the slaves.

Do not assume that a slave necessarily backs up the master "it belongs with" (for example the one on the same host); as the "Adding replica" lines above show, redis-trib may pair any slave with any master.

Normally all 6 nodes would have independent IPs; we put a master and a slave on each host only to save experiment resources.
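
You can check which master each slave actually replicates at any time:

# redis-cli -h 192.168.2.100 -p 6379 cluster nodes //each slave line carries the node ID of the master it replicates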


Misconception 2:

Following on from the above: since any slave can be paired with any master, couldn't we configure just 1 slave for the whole cluster?

No. The 3 masters split the 16384 hash slots between them, and a slave replicates exactly one master, so a single slave can back up at most 1/3 of the data.

Then what if we give that single slave a bigger configuration, does that solve the backup-capacity problem?

Still no. Merging 3 groups of hash slots into 1 replica would require combining them somehow, so backup speed and efficiency would suffer from the start.

And besides, if one master went down, how would that single slave recover it?

First, the cluster simply does not support such a topology. Second, even if it did, recovery would require splitting the merged backup apart again, so restore speed would be poor as well.
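
Going the other direction is supported: a master can have more than one replica. A sketch of adding a second slave to the first master (the new instance 192.168.2.100:6381 is a placeholder; the master ID is the one printed for 192.168.2.100:6379 above):

# redis-trib.rb add-node --slave --master-id 098e7eb756b6047fde988ab3c0b7189e1724ecf5 192.168.2.100:6381 192.168.2.100:6379 //new node first, then any node already in the cluster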

