
Installing zookeeper-3.4.12 for Hadoop

Installing HBase requires ZooKeeper. HBase can manage its own built-in ZooKeeper, but here we install zookeeper-3.4.12 as a standalone service.

Download from: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.12/

Upload zookeeper-3.4.12.tar.gz to the directory /home/hadoop on the master node.

Extract it: tar -xvf zookeeper-3.4.12.tar.gz

Rename the directory: mv zookeeper-3.4.12/ zookeeper

Under the zookeeper home directory, create two directories, data and logs, for storing data and logs:

[hadoop@master zookeeper]$ mkdir data
[hadoop@master zookeeper]$ mkdir logs
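The layout steps above can be sketched as a single script. `ZK_HOME` is a stand-in variable for the article's /home/hadoop/zookeeper path; here it defaults to a relative directory so the sketch is safe to run anywhere:

```shell
# Sketch of the setup steps above; adjust ZK_HOME to your install path
# (the article uses /home/hadoop/zookeeper).
ZK_HOME="${ZK_HOME:-$PWD/zookeeper}"

# Extract and rename, if the tarball is present in the current directory.
if [ -f zookeeper-3.4.12.tar.gz ]; then
  tar -xvf zookeeper-3.4.12.tar.gz
  mv zookeeper-3.4.12 "$ZK_HOME"
fi

# Create the data and log directories used by zoo.cfg below.
mkdir -p "$ZK_HOME/data" "$ZK_HOME/logs"
```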

Edit the configuration file:

cp  zoo_sample.cfg   zoo.cfg

vi zoo.cfg     # compared with the sample, the changes are the dataDir/dataLogDir lines and the server.N entries at the end

[hadoop@master conf]$ more zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hadoop/zookeeper/data
dataLogDir=/home/hadoop/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=master:2888:3888
server.2=saver1:2888:3888
server.3=saver2:2888:3888

A brief explanation of the configuration file:

============ zoo.cfg parameter notes ============
By design, the contents of zoo.cfg should be identical on every machine in the ZooKeeper ensemble.

tickTime: the basic time unit (ms) for interaction between servers and clients.
initLimit: the time allowed for a follower to connect and sync to the leader initially, expressed as a multiple of tickTime. If that many ticks elapse without completion, the connection fails.
syncLimit: the maximum interval allowed between a leader and a follower for request/acknowledgement exchanges, also in ticks; if it is exceeded, the follower is disconnected from the leader.
dataDir: the path where ZooKeeper stores its snapshot data.
dataLogDir: the path where ZooKeeper stores its transaction logs; if unset, it defaults to dataDir.

clientPort (2181): the port clients use to connect to ZooKeeper.

server.id=host:port:port
 id is the server's number in the ensemble.
 In each server's data directory (the one dataDir points to), create a file named myid
 whose single line is that server's id value; for example, server "1" writes "1" into its myid file.
 The id must be unique among the servers in the cluster and lie in the range 1~255.
 host is the server's hostname or IP address.
 The first port (2888) is used by followers to exchange information with the leader.
 The second port (3888) is used by the servers to communicate with each other during leader election after the leader fails.

maxClientCnxns: limits the number of client connections to a ZooKeeper server.
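Since initLimit and syncLimit are multiples of tickTime, the effective timeouts implied by the values in zoo.cfg above work out as follows:

```shell
# Effective timeouts implied by the zoo.cfg values above.
tickTime=2000   # ms per tick
initLimit=10    # ticks a follower may take for the initial sync with the leader
syncLimit=5     # ticks allowed between a request and its acknowledgement

echo "initial sync timeout: $(( tickTime * initLimit )) ms"   # 20000 ms
echo "request/ack timeout:  $(( tickTime * syncLimit )) ms"   # 10000 ms
```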

 

Edit /etc/profile and append:

export ZOOKEEPER_HOME=/home/hadoop/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf

Make the change take effect:

source /etc/profile

On the master machine, create a file named myid under /home/hadoop/zookeeper/data:

[hadoop@master data]$ more myid
1

Create a myid file on saver1 as well:

[hadoop@saver1 data]$ more myid
2

Create a myid file on saver2:

[hadoop@saver2 data]$ more myid
3

The contents of these three myid files correspond to the numbers after server. at the end of zoo.cfg.
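Rather than writing each myid by hand, the id can be derived from the server.N lines, since each host appears in exactly one of them. A sketch (the three entries are inlined here for illustration; on a real node you would read them from $ZK_HOME/conf/zoo.cfg and use $(hostname -s)):

```shell
# Derive this node's myid from the server.N entries of zoo.cfg.
# The entries are inlined for illustration; read the real file on a node.
servers='server.1=master:2888:3888
server.2=saver1:2888:3888
server.3=saver2:2888:3888'

host=master   # on a real node: host=$(hostname -s)
id=$(printf '%s\n' "$servers" | sed -n "s/^server\.\([0-9]*\)=${host}:.*/\1/p")
echo "$id"    # prints 1; this value goes into $ZK_HOME/data/myid
```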

Then start ZooKeeper on all three servers:

zkServer.sh start
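If passwordless ssh between the nodes is already configured (as it typically is on a Hadoop cluster), the three starts can be driven from the master in one loop. Shown here as a dry run that only prints the commands it would issue:

```shell
# Dry run: print the start command for each quorum member.
# Remove the leading "echo" to actually run the commands over ssh.
for host in master saver1 saver2; do
  echo ssh "$host" "zkServer.sh start"
done
```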

 

[hadoop@master ~]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

 

You can then observe:

master

[hadoop@master ~]$ jps
17888 NameNode
18022 DataNode
18454 NodeManager
18807 Jps
18345 ResourceManager
18782 QuorumPeerMain
18191 SecondaryNameNode
[hadoop@master ~]$
[hadoop@master ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Mode: follower

saver1

[hadoop@saver1 ~]$ jps
9010 Jps
8979 QuorumPeerMain
8727 DataNode
8839 NodeManager
[hadoop@saver1 ~]$
[hadoop@saver1 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Mode: leader

saver2

[hadoop@saver2 ~]$ jps
8738 NodeManager
8626 DataNode
8903 Jps
8878 QuorumPeerMain
[hadoop@saver2 ~]$
[hadoop@saver2 ~]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper/bin/../conf/zoo.cfg
Mode: follower

Then start HBase:

[hadoop@master ~]$ start-hbase.sh
running master, logging to /home/hadoop/hbase/logs/hbase-hadoop-master-master.out
saver1: running regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-saver1.out
saver2: running regionserver, logging to /home/hadoop/hbase/bin/../logs/hbase-hadoop-regionserver-saver2.out
[hadoop@master ~]$ jps
19168 Jps
17888 NameNode
18022 DataNode
18454 NodeManager
18983 HMaster
18345 ResourceManager
18782 QuorumPeerMain
18191 SecondaryNameNode

 

 

Starting and stopping:

zkServer.sh start
zkServer.sh stop
zkServer.sh restart
zkServer.sh status

 

Foreground display: the original post ended with screenshots of the running processes on the master node and the saver nodes, omitted here.

Done.