Building a Highly Available MongoDB Cluster (Sharding)
KaliArch · 2017-12-04 21:57:41
For MongoDB basics, see: https://blog.51cto.com/kaliarch/2044423
For MongoDB replica sets, see: https://blog.51cto.com/kaliarch/2044618
1. Overview
1.1 Background
In a replica set, every secondary node holds a full copy of the data, which puts the nodes under heavy pressure in high-concurrency, large-data scenarios. To address this, and to keep a MongoDB cluster horizontally scalable as the data volume grows, MongoDB provides a sharding mechanism.
1.2 What sharding is
Sharding is the process of splitting a database and spreading it across multiple machines, so that more data can be stored and heavier load handled without one extremely powerful server. A collection's data is cut into small chunks, and the chunks are spread across several shards; each shard carries only part of the total data. Operations are routed by mongos, a routing process that knows which shard holds which data.
1.3 Core components
Sharding uses four components: mongos, config server, shard, and replica set.
mongos: the entry point for all cluster requests. Every request passes through mongos, so the application layer does not need to implement its own routing. mongos is a request dispatcher that forwards each external request to the appropriate shard server. Because it is the single entry point, multiple mongos instances are usually deployed (HA) to avoid a single point of failure.
config server: stores all the cluster metadata (sharding and routing configuration). mongos itself has no persistent store for shard and routing information; it only caches the metadata in memory, loading it from the config servers at first start or after a restart. When the configuration changes, the config servers notify all mongos instances to refresh their state so requests keep routing correctly. In production, multiple config servers are deployed so the configuration is not lost with a single node.
shard: storing, say, 1 TB on a single server puts enormous pressure on it, whether the bottleneck is disk, network I/O, CPU, or memory. If several servers split that 1 TB, each holds a manageable amount. In a MongoDB cluster you only need to define the sharding rules; operations issued through mongos are then automatically forwarded to the right backend shard server.
replica set: in the overall cluster architecture, if a single shard node goes offline, part of the cluster's data disappears, which must not happen. Each shard is therefore itself a replica set to guarantee data reliability; in production this is typically 2 data-bearing members plus 1 arbiter.
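To make the mongos routing idea above concrete, here is a deliberately simplified toy sketch. It is NOT MongoDB's real algorithm (mongos consults chunk-range metadata on the config servers); it only illustrates that a router holds a mapping from shard-key values to shards and always sends the same key to the same shard.

```javascript
// Toy illustration of shard routing -- not MongoDB's actual implementation.
const shards = ["shard1", "shard2", "shard3"];

function route(shardKey) {
  // Hash the key and pick a shard deterministically. Real mongos instead
  // looks up which chunk range owns the key in the config server metadata.
  let h = 0;
  for (const ch of String(shardKey)) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return shards[h % shards.length];
}

console.log(route("user:1001")); // the same key always lands on the same shard
```

The essential property shown here is determinism: as long as every router shares the same mapping, any mongos can serve any request.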
1.4 Architecture diagram
2. Installation and Deployment
2.1 Environment
To save servers, a multi-instance layout is used: three mongos, three config servers, and each server runs shard members in different roles (so that data is spread evenly later, the three shards take different roles on each server). Each shard is a replica set for high availability. Hosts and ports are as follows:
| Hostname | IP address | mongos | config server | shard |
| --- | --- | --- | --- | --- |
| mongodb-1 | 172.20.6.10 | port 20000 | port 21000 | shard1 primary: 22001; shard2 secondary: 22002; shard3 arbiter: 22003 |
| mongodb-2 | 172.20.6.11 | port 20000 | port 21000 | shard1 arbiter: 22001; shard2 primary: 22002; shard3 secondary: 22003 |
| mongodb-3 | 172.20.6.12 | port 20000 | port 21000 | shard1 secondary: 22001; shard2 arbiter: 22002; shard3 primary: 22003 |
2.2 Installation
2.2.1 Download the software and set up the PATH
wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.4.10.tgz
tar -zxvf mongodb-linux-x86_64-rhel62-3.4.10.tgz -C /usr/local
ln -sv /usr/local/mongodb-linux-x86_64-rhel62-3.4.10 /usr/local/mongodb
echo 'PATH=$PATH:/usr/local/mongodb/bin' >/etc/profile.d/mongodb.sh
source /etc/profile.d/mongodb.sh
2.2.2 Create the directories
Create the directories and log files on mongodb-1, mongodb-2, and mongodb-3:
mkdir -p /data/mongodb/mongos/{log,conf}
mkdir -p /data/mongodb/mongoconf/{data,log,conf}
mkdir -p /data/mongodb/shard1/{data,log,conf}
mkdir -p /data/mongodb/shard2/{data,log,conf}
mkdir -p /data/mongodb/shard3/{data,log,conf}
touch /data/mongodb/mongos/log/mongos.log
touch /data/mongodb/mongoconf/log/mongoconf.log
touch /data/mongodb/shard1/log/shard1.log
touch /data/mongodb/shard2/log/shard2.log
touch /data/mongodb/shard3/log/shard3.log
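The ten mkdir/touch commands above can be collapsed into one loop. A small sketch (the `make_dirs` helper and its root parameter are introduced here for illustration; the guide itself uses /data/mongodb directly):

```shell
# make_dirs ROOT: create the per-component data/log/conf directories and the
# empty log files used throughout this guide.
make_dirs() {
  root="$1"
  for comp in mongoconf shard1 shard2 shard3; do
    for sub in data log conf; do
      mkdir -p "$root/$comp/$sub"
    done
    touch "$root/$comp/log/$comp.log"
  done
  # mongos keeps no data of its own, so it gets only log and conf
  mkdir -p "$root/mongos/log" "$root/mongos/conf"
  touch "$root/mongos/log/mongos.log"
}

make_dirs /data/mongodb
```

Parameterizing the root also makes it easy to stand up a scratch layout elsewhere when testing.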
2.2.3 Configure the config server replica set
Since MongoDB 3.4, the config servers must themselves run as a replica set; here the replica set is named replconf.
Create the config server configuration file on all three servers and start the service:
cat >/data/mongodb/mongoconf/conf/mongoconf.conf <<EOF
dbpath=/data/mongodb/mongoconf/data
logpath=/data/mongodb/mongoconf/log/mongoconf.log #config server log file
logappend=true
port = 21000
maxConns = 1000 #max connections
journal = true #enable journaling
journalCommitInterval = 200
fork = true #run in the background
syncdelay = 60
oplogSize = 1000
configsvr = true #this is a config server
replSet=replconf #config server replica set name: replconf
EOF
mongod -f /data/mongodb/mongoconf/conf/mongoconf.conf #start the config server on all three servers
Log in to any one of the servers and initialize the config server replica set (connect to port 21000, e.g. mongo 172.20.6.10:21000):
use admin
config = {_id:"replconf",members:[
{_id:0,host:"172.20.6.10:21000"},
{_id:1,host:"172.20.6.11:21000"},
{_id:2,host:"172.20.6.12:21000"},]
}
rs.initiate(config);
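The document passed to rs.initiate() is plain JSON, so it can be generated instead of hand-typed. A sketch in mongo-shell-compatible JavaScript (the `replSetConfig` helper is introduced here for illustration; hosts and port are the ones from this deployment):

```javascript
// Build the rs.initiate() config document for the replconf replica set.
const hosts = ["172.20.6.10", "172.20.6.11", "172.20.6.12"];

function replSetConfig(name, port) {
  return {
    _id: name,
    // one member per host, numbered 0..2
    members: hosts.map((h, i) => ({ _id: i, host: `${h}:${port}` })),
  };
}

const config = replSetConfig("replconf", 21000);
console.log(JSON.stringify(config, null, 2));
// In the mongo shell you would then run: rs.initiate(config)
```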
Check the replica set status:
replconf:OTHER> rs.status()
{
"set" : "replconf",
"date" : ISODate("2017-12-04T07:42:09.054Z"),
"myState" : 1,
"term" : NumberLong(1),
"configsvr" : true,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1512373328, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1512373328, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1512373328, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1512373328, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "172.20.6.10:21000",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 221,
"optime" : {
"ts" : Timestamp(1512373328, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T07:42:08Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1512373296, 1),
"electionDate" : ISODate("2017-12-04T07:41:36Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "172.20.6.11:21000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 42,
"optime" : {
"ts" : Timestamp(1512373318, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1512373318, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T07:41:58Z"),
"optimeDurableDate" : ISODate("2017-12-04T07:41:58Z"),
"lastHeartbeat" : ISODate("2017-12-04T07:42:08.637Z"),
"lastHeartbeatRecv" : ISODate("2017-12-04T07:42:07.648Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "172.20.6.10:21000",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.20.6.12:21000",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 42,
"optime" : {
"ts" : Timestamp(1512373318, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1512373318, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T07:41:58Z"),
"optimeDurableDate" : ISODate("2017-12-04T07:41:58Z"),
"lastHeartbeat" : ISODate("2017-12-04T07:42:08.637Z"),
"lastHeartbeatRecv" : ISODate("2017-12-04T07:42:07.642Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "172.20.6.10:21000",
"configVersion" : 1
}
],
"ok" : 1
}
The config server replica set is now configured: mongodb-1 is primary, mongodb-2 and mongodb-3 are secondaries.
2.2.4 Configure the shard replica sets
Configure the shard services on all three servers.
shard1 configuration:
cat >/data/mongodb/shard1/conf/shard.conf <<EOF
dbpath=/data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
port = 22001
logappend = true
nohttpinterface = true
fork = true
oplogSize = 4096
journal = true
#engine = wiredTiger
#cacheSizeGB = 38G
smallfiles=true
shardsvr=true #this is a shard server
replSet=shard1 #replica set name: shard1
EOF
mongod -f /data/mongodb/shard1/conf/shard.conf #start the shard1 service
The service should now be running, with shard1 listening on port 22001. Next, log in to mongodb-1 and initialize the shard1 replica set:
mongo 172.20.6.10:22001
use admin
config = {_id:"shard1",members:[
{_id:0,host:"172.20.6.10:22001"},
{_id:1,host:"172.20.6.11:22001",arbiterOnly:true},
{_id:2,host:"172.20.6.12:22001"},]
}
rs.initiate(config);
Check the replica set status (only part of the output is shown):
{
"_id" : 0,
"name" : "172.20.6.10:22001",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY", #mongodb-1為primary
"uptime" : 276,
"optime" : {
"ts" : Timestamp(1512373911, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T07:51:51Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1512373879, 1),
"electionDate" : ISODate("2017-12-04T07:51:19Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "172.20.6.11:22001",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER", #mongodb-2為arbiter
"uptime" : 45,
"lastHeartbeat" : ISODate("2017-12-04T07:51:53.597Z"),
"lastHeartbeatRecv" : ISODate("2017-12-04T07:51:51.243Z"),
"pingMs" : NumberLong(0),
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.20.6.12:22001",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY", #mongodb-3為secondary
"uptime" : 45,
"optime" : {
"ts" : Timestamp(1512373911, 1),
"t" : NumberLong(1)
},
The shard1 replica set is now configured: mongodb-1 is primary, mongodb-2 is the arbiter, and mongodb-3 is secondary.
Repeat the same steps for shard2 and shard3.
Note: initialize the shard2 replica set on mongodb-2, and the shard3 replica set on mongodb-3.
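The three initialization documents differ only in which host carries arbiterOnly: for shard N the arbiter sits on host index N % 3, which is exactly the rotation in the layout table. A sketch capturing that pattern (the `shardConfig` helper is illustrative; hosts and the 2200x port convention are this deployment's):

```javascript
// Generate the rs.initiate() documents for shard1..shard3 with a rotating arbiter.
const hosts = ["172.20.6.10", "172.20.6.11", "172.20.6.12"];

function shardConfig(n) {
  const port = 22000 + n;   // shard1 -> 22001, shard2 -> 22002, shard3 -> 22003
  const arbiter = n % 3;    // shard1 -> mongodb-2, shard2 -> mongodb-3, shard3 -> mongodb-1
  return {
    _id: `shard${n}`,
    members: hosts.map((h, i) => {
      const m = { _id: i, host: `${h}:${port}` };
      if (i === arbiter) m.arbiterOnly = true;
      return m;
    }),
  };
}

console.log(JSON.stringify(shardConfig(2), null, 2));
```

Spreading primaries and arbiters across hosts like this is what keeps the data (and write load) balanced across the three servers.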
shard2 configuration file:
cat >/data/mongodb/shard2/conf/shard.conf <<EOF
dbpath=/data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
port = 22002
logappend = true
nohttpinterface = true
fork = true
oplogSize = 4096
journal = true
#engine = wiredTiger
#cacheSizeGB = 38G
smallfiles=true
shardsvr=true
replSet=shard2
EOF
mongod -f /data/mongodb/shard2/conf/shard.conf
shard3 configuration file:
cat >/data/mongodb/shard3/conf/shard.conf <<EOF
dbpath=/data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
port = 22003
logappend = true
nohttpinterface = true
fork = true
oplogSize = 4096
journal = true
#engine = wiredTiger
#cacheSizeGB = 38G
smallfiles=true
shardsvr=true
replSet=shard3
EOF
mongod -f /data/mongodb/shard3/conf/shard.conf
Initialize the shard2 replica set on mongodb-2:
mongo 172.20.6.11:22002 #log in to mongodb-2
use admin
config = {_id:"shard2",members:[
{_id:0,host:"172.20.6.10:22002"},
{_id:1,host:"172.20.6.11:22002"},
{_id:2,host:"172.20.6.12:22002",arbiterOnly:true},]
}
rs.initiate(config);
Check the shard2 replica set status:
{
"_id" : 0,
"name" : "172.20.6.10:22002",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY", #mongodb-2為secondary
"uptime" : 15,
"optime" : {
"ts" : Timestamp(1512374668, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1512374668, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T08:04:28Z"),
"optimeDurableDate" : ISODate("2017-12-04T08:04:28Z"),
"lastHeartbeat" : ISODate("2017-12-04T08:04:30.527Z"),
"lastHeartbeatRecv" : ISODate("2017-12-04T08:04:28.492Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "172.20.6.11:22002",
"configVersion" : 1
},
{
"_id" : 1,
"name" : "172.20.6.11:22002",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY", #mongodb-2為primary
"uptime" : 211,
"optime" : {
"ts" : Timestamp(1512374668, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T08:04:28Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1512374666, 1),
"electionDate" : ISODate("2017-12-04T08:04:26Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "172.20.6.12:22002", #mongodb-3為arbiter
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 15,
"lastHeartbeat" : ISODate("2017-12-04T08:04:30.527Z"),
"lastHeartbeatRecv" : ISODate("2017-12-04T08:04:28.384Z"),
"pingMs" : NumberLong(0),
"configVersion" : 1
}
Log in to mongodb-3 and initialize the shard3 replica set:
mongo 172.20.6.12:22003 #log in to mongodb-3
use admin
config = {_id:"shard3",members:[
{_id:0,host:"172.20.6.10:22003",arbiterOnly:true},
{_id:1,host:"172.20.6.11:22003"},
{_id:2,host:"172.20.6.12:22003"},]
}
rs.initiate(config);
Check the shard3 replica set status:
{
"_id" : 0,
"name" : "172.20.6.10:22003",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER", #mongodb-1為arbiter
"uptime" : 18,
"lastHeartbeat" : ISODate("2017-12-04T08:07:37.488Z"),
"lastHeartbeatRecv" : ISODate("2017-12-04T08:07:36.224Z"),
"pingMs" : NumberLong(0),
"configVersion" : 1
},
{
"_id" : 1,
"name" : "172.20.6.11:22003",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY", #mongodb-2為secondary
"uptime" : 18,
"optime" : {
"ts" : Timestamp(1512374851, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1512374851, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T08:07:31Z"),
"optimeDurableDate" : ISODate("2017-12-04T08:07:31Z"),
"lastHeartbeat" : ISODate("2017-12-04T08:07:37.488Z"),
"lastHeartbeatRecv" : ISODate("2017-12-04T08:07:36.297Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "172.20.6.12:22003",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "172.20.6.12:22003",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY", #mongodb-3為primary
"uptime" : 380,
"optime" : {
"ts" : Timestamp(1512374851, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2017-12-04T08:07:31Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1512374849, 1),
"electionDate" : ISODate("2017-12-04T08:07:29Z"),
"configVersion" : 1,
"self" : true
}
All three shard replica sets are now configured.
2.2.5 Configure the mongos routers
The config servers and shard servers are now running on all three hosts; next, configure the three mongos servers.
Because mongos loads its configuration from the config servers into memory, it keeps no data directory of its own; the configdb option points it at the config server replica set.
cat >/data/mongodb/mongos/conf/mongos.conf<<EOF
logpath=/data/mongodb/mongos/log/mongos.log
logappend=true
port = 20000
maxConns = 1000
configdb=replconf/172.20.6.10:21000,172.20.6.11:21000,172.20.6.12:21000 #point at the config server replica set
fork = true
EOF
mongos -f /data/mongodb/mongos/conf/mongos.conf #start the mongos service
The config server replica set, the shard replica sets, and the mongos services are all running, but no shards have been added yet, so sharding cannot be used. Log in to a mongos to add the shards.
Log in to any mongos:
mongo 172.20.6.10:20000
use admin
db.runCommand({addshard:"shard1/172.20.6.10:22001,172.20.6.11:22001,172.20.6.12:22001"})
db.runCommand({addshard:"shard2/172.20.6.10:22002,172.20.6.11:22002,172.20.6.12:22002"})
db.runCommand({addshard:"shard3/172.20.6.10:22003,172.20.6.11:22003,172.20.6.12:22003"})
Check the cluster; the three shards just added should now be listed.
3. Testing
The config service, routing service, shard services, and replica sets are now all wired together, and inserted data can be sharded automatically. Connect to a mongos to enable sharding for a specific database and collection.
Note: sharding must be enabled from the admin database.
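As a concrete sketch of that final step, the mongo-shell session below enables sharding on a sample database and collection, then inserts test data to watch it spread across shards. It must be run against a live mongos, and the database name `testdb`, collection `table1`, and shard key `id` are illustrative choices, not something fixed by the deployment above:

```
mongo 172.20.6.10:20000   // connect to any mongos
use admin
db.runCommand({ enablesharding: "testdb" })
db.runCommand({ shardcollection: "testdb.table1", key: { id: 1 } })

use testdb
for (var i = 1; i <= 100000; i++) { db.table1.insert({ id: i, name: "test" + i }) }
db.table1.stats()   // the "shards" field shows per-shard document counts
sh.status()         // shows chunk distribution across shard1/shard2/shard3
```

If the counts in db.table1.stats() are heavily skewed, the balancer may simply not have finished moving chunks yet.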