MongoDB replica set: a detailed guide to configuring a high-performance multi-server setup
I have written before about MongoDB's multi-server configuration in master-slave mode; see my earlier article on MongoDB master-slave configuration. Master-slave mode cannot fail over or recover automatically, so I recommend using a MongoDB replica set instead for multi-server high availability. My impression is that a replica set comes with heartbeat functionality built in, which is quite powerful.
I. Three servers: one primary, two secondaries
Server 1: 127.0.0.1:27017
Server 2: 127.0.0.1:27018
Server 3: 127.0.0.1:27019
1. Create the database directories
[root@localhost ~]# mkdir /var/lib/{mongodb_2,mongodb_3}
All three "servers" are simulated on one machine, which is why each one gets its own data directory.
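The same step as a small, rerunnable sketch. The /tmp base path is only so the sketch is safe to try anywhere; on the real box you would use /var/lib as root, exactly as in the command above:

```shell
# Create one data directory per mongod instance; mkdir -p makes the
# step idempotent. BASE is a scratch path for this sketch -- the
# article itself uses /var/lib.
BASE=/tmp/repmore-demo
for d in mongodb mongodb_2 mongodb_3; do
    mkdir -p "$BASE/$d"
done
ls "$BASE"
```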
2. Create the configuration files
[root@localhost ~]# cat /etc/mongodb.conf | awk '{if($0 !~ /^$/ && $0 !~ /^#/) {print $0}}'    # primary server
port = 27017                                 # listening port
fork = true                                  # run as a daemon
pidfilepath = /var/run/mongodb/mongodb.pid   # PID file
logpath = /var/log/mongodb/mongodb.log       # log file
dbpath = /var/lib/mongodb                    # data directory
journal = true                               # enable journaling
nohttpinterface = true                       # disable the HTTP interface
directoryperdb = true                        # one directory per database
logappend = true                             # append to the log instead of overwriting
replSet = repmore                            # replica set name (choose your own)
oplogSize = 1000                             # oplog size in MB
[root@localhost ~]# cat /etc/mongodb_2.conf | awk '{if($0 !~ /^$/ && $0 !~ /^#/) {print $0}}'    # first secondary
port = 27018
fork = true
pidfilepath = /var/run/mongodb/mongodb_2.pid
logpath = /var/log/mongodb/mongodb_2.log
dbpath = /var/lib/mongodb_2
journal = true
nohttpinterface = true
directoryperdb = true
logappend = true
replSet = repmore
oplogSize = 1000
[root@localhost ~]# cat /etc/mongodb_3.conf | awk '{if($0 !~ /^$/ && $0 !~ /^#/) {print $0}}'    # second secondary
port = 27019
fork = true
pidfilepath = /var/run/mongodb/mongodb_3.pid
logpath = /var/log/mongodb/mongodb_3.log
dbpath = /var/lib/mongodb_3
journal = true
nohttpinterface = true
oplogSize = 1000
directoryperdb = true
logappend = true
replSet = repmore
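The three config files differ only in the port and the _2/_3 suffixes on the pid, log, and db paths, so the secondary configs can be generated from the primary's. A sketch of that idea; it writes a throwaway source config to /tmp so it is safe to run anywhere, but on the real box you would point SRC at /etc/mongodb.conf and write the results to /etc:

```shell
# Derive mongodb_2.conf and mongodb_3.conf from the primary config by
# rewriting the port and the pid/log/db paths. SRC is a scratch copy
# here -- substitute the real /etc/mongodb.conf when doing this for real.
SRC=/tmp/mongodb.conf
cat > "$SRC" <<'EOF'
port = 27017
fork = true
pidfilepath = /var/run/mongodb/mongodb.pid
logpath = /var/log/mongodb/mongodb.log
dbpath = /var/lib/mongodb
journal = true
nohttpinterface = true
directoryperdb = true
logappend = true
replSet = repmore
oplogSize = 1000
EOF
for n in 2 3; do
    sed -e "s/^port = 27017/port = $((27016 + n))/" \
        -e "s#/mongodb\.pid#/mongodb_${n}.pid#" \
        -e "s#/mongodb\.log#/mongodb_${n}.log#" \
        -e "s#/var/lib/mongodb\$#/var/lib/mongodb_${n}#" \
        "$SRC" > "/tmp/mongodb_${n}.conf"
done
grep '^port' /tmp/mongodb_2.conf /tmp/mongodb_3.conf
```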
One thing to watch out for here: do not turn authentication on, or the members will be unable to connect to each other and rs.status() will show "lastHeartbeatMessage" : "initial sync couldn't connect to 127.0.0.1:27017".
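If you do eventually want authentication on a replica set, the usual approach is not plain auth alone but a shared key file that the members use to authenticate to each other. A hedged sketch of the extra lines, assuming a key path of /etc/mongodb.key (an assumption, not from the article); the same file must exist on every member, owned by the mongod user with mode 600:

```
# additions to every mongodb*.conf -- the key path is an assumption
auth = true
keyFile = /etc/mongodb.key
```

A key can be generated once with something like `openssl rand -base64 64 > /etc/mongodb.key` and then copied to the other members.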
3. Start the three servers
mongod -f /etc/mongodb.conf
mongod -f /etc/mongodb_2.conf
mongod -f /etc/mongodb_3.conf
Note: on first startup the primary comes up fairly quickly, while the secondaries take a little longer.
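Because the secondaries can lag on first start, the pidfiles declared in the configs give a simple readiness signal: with fork = true, each mongod writes its pidfilepath once it is up. A sketch of a portable polling helper; the self-check at the bottom uses a throwaway /tmp file rather than a live mongod, so it runs anywhere:

```shell
# Poll the pidfiles before moving on to rs.initiate(). On the real box:
#   for f in /var/run/mongodb/mongodb.pid /var/run/mongodb/mongodb_2.pid \
#            /var/run/mongodb/mongodb_3.pid; do
#       wait_pidfile "$f" 60 || echo "mongod for $f did not come up"
#   done
wait_pidfile() {    # wait_pidfile FILE TIMEOUT_SECS; 0 once FILE is non-empty
    i=0
    while [ ! -s "$1" ]; do
        i=$((i + 1))
        [ "$i" -gt "$2" ] && return 1
        sleep 1
    done
    return 0
}

# Self-check with a fake pidfile that appears after about one second:
rm -f /tmp/fake-mongod.pid
( sleep 1; echo 12345 > /tmp/fake-mongod.pid ) &
wait_pidfile /tmp/fake-mongod.pid 5 && echo "pidfile present"
```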
II. Configure and initialize the replica set
1. Define the replica set members (giving 127.0.0.1:27017 the highest priority makes it the preferred primary)
> config = {_id: "repmore", members: [
...     {_id: 0, host: "127.0.0.1:27017", priority: 2},
...     {_id: 1, host: "127.0.0.1:27018", priority: 1},
...     {_id: 2, host: "127.0.0.1:27019", priority: 1}
... ]}
2. Initialize the replica set
> rs.initiate(config);
{
        "info" : "Config now saved locally. Should come online in about a minute.",
        "ok" : 1
}
3. Check the status of each member
repmore:PRIMARY> rs.status();
{
"set" : "repmore" ,
"date" : ISODate( "2013-12-16T21:01:51Z" ),
"myState" : 2,
"syncingTo" : "127.0.0.1:27017" ,
"members" : [
{
"_id" : 0,
"name" : "127.0.0.1:27017" ,
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY" ,
"uptime" : 33,
"optime" : Timestamp(1387227638, 1),
"optimeDate" : ISODate( "2013-12-16T21:00:38Z" ),
"lastHeartbeat" : ISODate( "2013-12-16T21:01:50Z" ),
"lastHeartbeatRecv" : ISODate( "2013-12-16T21:01:50Z" ),
"pingMs" : 0,
"syncingTo" : "127.0.0.1:27018"
},
{
"_id" : 1,
"name" : "127.0.0.1:27018" ,
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY" ,
"uptime" : 1808,
"optime" : Timestamp(1387227638, 1),
"optimeDate" : ISODate( "2013-12-16T21:00:38Z" ),
"errmsg" : "syncing to: 127.0.0.1:27017" ,
"self" : true
},
{
"_id" : 2,
"name" : "127.0.0.1:27019" ,
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY" ,
"uptime" : 1806,
"optime" : Timestamp(1387227638, 1),
"optimeDate" : ISODate( "2013-12-16T21:00:38Z" ),
"lastHeartbeat" : ISODate( "2013-12-16T21:01:50Z" ),
"lastHeartbeatRecv" : ISODate( "2013-12-16T21:01:51Z" ),
"pingMs" : 0,
"lastHeartbeatMessage" : "syncing to: 127.0.0.1:27018" ,
"syncingTo" : "127.0.0.1:27018"
}
],
"ok" : 1
}
Note that rs.initiate also takes some time to complete. Checking the status immediately after running it, the secondaries' stateStr was "STARTUP2" rather than SECONDARY; after waiting a little while it settled down.
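That wait can be scripted. A sketch of a tiny helper that scans a captured rs.status() document for members that have not finished coming up; it only greps text, so it works on any saved copy of the output, and the mongo --eval pipeline in the comment shows how you would feed it live data:

```shell
# Exit 0 while any member is still in STARTUP2 (initial sync) or
# RECOVERING; exit 1 once everything is PRIMARY/SECONDARY.
# Live usage (sketch):
#   while mongo 127.0.0.1:27017 --quiet --eval 'printjson(rs.status())' | not_ready; do
#       sleep 2
#   done
not_ready() {
    grep -Eq '"stateStr" *: *"(STARTUP2|RECOVERING)"'
}

# Self-check against two captured fragments:
echo '"stateStr" : "STARTUP2",'  | not_ready && echo "still syncing"
echo '"stateStr" : "SECONDARY",' | not_ready || echo "all members up"
```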
III. Testing the primary and secondaries
1. Test the primary server
repmore:PRIMARY> show dbs;
local   1.078125GB
repmore:PRIMARY> use test
switched to db test
repmore:PRIMARY> db.test.insert({'name': 'tank', 'phone': '12345678'});
repmore:PRIMARY> db.test.find();
{ "_id" : ObjectId("52af64549d2f9e75bc57cda7"), "name" : "tank", "phone" : "12345678" }
2. Test a secondary server
[root@localhost mongodb]# mongo 127.0.0.1:27018    # connect to a secondary
MongoDB shell version: 2.4.6
connecting to: 127.0.0.1:27018/test
repmore:SECONDARY> show dbs;
local   1.078125GB
test    0.203125GB
repmore:SECONDARY> db.test.find();     // reads are refused until slaveOk is set
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
repmore:SECONDARY> rs.slaveOk();       // allow reads on this secondary
repmore:SECONDARY> db.test.find();     // the document just inserted on the primary is visible here
{ "_id" : ObjectId("52af64549d2f9e75bc57cda7"), "name" : "tank", "phone" : "12345678" }
repmore:SECONDARY> db.test.insert({'name': 'zhangying', 'phone': '12345678'});   // secondaries are read-only
not master
At this point the replica set is fully configured.
IV. Failover testing
As mentioned earlier, a MongoDB replica set can fail over automatically; let's simulate that process.
1. Failover
1.1 Kill the primary server
1 2 3 4 5 6 7 8 9 10 |
[root@localhost mongodb] # ps aux |grep mongod //查看所有的mongod
root 16977 0.2 1.1 3153692 44464 ? Sl 04:31 0:02 mongod -f /etc/mongodb .conf
root 17032 0.2 1.1 3128996 43640 ? Sl 04:31 0:02 mongod -f /etc/mongodb_2 .conf
root 17092 0.2 0.9 3127976 38324 ? Sl 04:31 0:02 mongod -f /etc/mongodb_3 .conf
root 20400 0.0 0.0 103248 860 pts /2 S+ 04:47 0:00 grep mongod
[root@localhost mongodb] # kill 16977 //關閉主服務器進程
[root@localhost mongodb] # ps aux |grep mongod
root 17032 0.2 1.1 3133124 43836 ? Sl 04:31 0:02 mongod -f /etc/mongodb_2 .conf
root 17092 0.2 0.9 3127976 38404 ? Sl 04:31 0:02 mongod -f /etc/mongodb_3 .conf
root 20488 0.0 0.0 103248 860 pts /2 S+ 04:47 0:00 grep mongod
|
1.2 Run a command on the old primary's connection
repmore:PRIMARY> show dbs;
Tue Dec 17 04:48:02.392 DBClientCursor::init call() failed
1.3 Check the status from a secondary, as in the figure below:
[Figure: replica set failover test]
The former secondary is now the primary; failover succeeded.
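To see at a glance which member took over, a small helper can pull the primary's address out of rs.status() output. A sketch; it is pure text parsing of the shell's printjson layout, so it also runs against a saved copy, and on the live set you would feed it `mongo --quiet --eval 'printjson(rs.status())'`:

```shell
# Print the host whose stateStr is PRIMARY. Relies on the mongo shell's
# printjson layout: each member's "name" line precedes its "stateStr" line.
current_primary() {
    awk '/"name"/ { gsub(/[",]/, ""); host = $3 }
         /"stateStr" : "PRIMARY"/ { print host }'
}

# Self-check against a captured fragment (27017 is down, 27018 took over):
current_primary <<'EOF'
        {
                "name" : "127.0.0.1:27017",
                "stateStr" : "(not reachable/healthy)"
        },
        {
                "name" : "127.0.0.1:27018",
                "stateStr" : "PRIMARY"
        }
EOF
```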
2. Recovery
mongod -f /etc/mongodb.conf
This restarts the server that was just killed. Log back in and check rs.status(): the set returns to its original state, with 127.0.0.1:27017 as primary again, since it has the highest priority.