Building a Distributed Cluster with Hadoop 2.7.3 + HBase 1.2.5 + ZooKeeper 3.4.6


 


1. Environment Overview

My understanding:
ZooKeeper can be deployed as a cluster on its own; HBase cannot stand alone as a cluster and has to be integrated with Hadoop and HDFS.

The cluster needs at least 3 nodes (that is, 3 servers): 1 master and 2 slaves, connected over a LAN and able to ping each other. The example IP assignment used below is:


IP             Role
10.10.50.133   master
10.10.125.156  slave1
10.10.114.112  slave2


All three nodes run CentOS 6.5. To ease maintenance, use the same username, the same password, and identical Hadoop, HBase, and ZooKeeper directory layouts across the cluster.


Note:
Hostnames are best kept identical to the roles above; if they differ it does not matter, as long as the mapping is configured in /etc/hosts.
On CentOS 6 the hostname can be changed by editing /etc/sysconfig/network.
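For instance, to rename the master persistently (a sketch of mine; use the matching name on each node):

# sed -i 's/^HOSTNAME=.*/HOSTNAME=master/' /etc/sysconfig/network
# hostname master    # takes effect immediately in the running session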


Packages to download:
hadoop-2.7.3.tar.gz
hbase-1.2.5-bin.tar.gz
zookeeper-3.4.6.tar.gz
jdk-8u111-linux-x64.rpm


Because this is a test environment, everything here is done as root. In production, use a dedicated user such as hadoop instead and give it ownership of the installation directory:
# chown -R hadoop:hadoop /data/yunva


2. Preparation

 

2.1 Install the JDK

 

Set up the JDK on all three machines. Download jdk-8u111-linux-x64.rpm and install it directly:


# rpm -ivh jdk-8u111-linux-x64.rpm


Edit the configuration file (vim /etc/profile) and add:

export JAVA_HOME=/usr/java/jdk1.8.0_111 # adjust on machines where the JDK lives elsewhere
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/data/yunva/hadoop-2.7.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_SSH_OPTS="-p 48490" # required when sshd is not on the default port 22; here it listens on 48490


Because the JDK version differs across this particular deployment, JAVA_HOME has to be set individually per machine:
master
export JAVA_HOME=/usr/java/jdk1.8.0_111


slave1
export JAVA_HOME=/usr/java/jdk1.8.0_65


slave2
export JAVA_HOME=/usr/java/jdk1.8.0_102


Then reload the profile so the changes take effect:


# source /etc/profile 
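A quick sanity check (my suggestion, not in the original post):

# java -version        # should print 1.8.0_111 (or the per-host version)
# echo $JAVA_HOME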


2.2 Add Hosts Mappings

 

Add the hosts mappings on each of the three nodes:


# vim /etc/hosts


Append the following:


10.10.50.133 master
10.10.125.156 slave1
10.10.114.112 slave2


2.3 Passwordless SSH Within the Cluster

 

CentOS installs ssh by default; if yours lacks it, install ssh first.


The cluster must be driven over passwordless ssh: every machine must be able to log into itself without a password, and master and slaves must be able to log into each other without a password in both directions; slave-to-slave logins are not required.


2.3.1 Let master log into slave1 and slave2 without a password


There are three main steps:
① generate the key pair
② append the public key to the authorized keys file
③ fix the permissions


# ssh-keygen -t rsa
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# chmod 700 ~/.ssh && chmod 600 ~/.ssh/*


Test it; the first login may ask for a yes confirmation, after which it goes straight through (since sshd here listens on 48490 rather than 22, pass -p 48490):


# ssh -p 48490 master
# ssh -p 48490 slave1
# ssh -p 48490 slave2


Set up passwordless self-login on slave1 and slave2 in the same way.


There is also a shortcut: once every server has run ssh-keygen -t rsa and each machine's public key has been appended to a single authorized_keys on master (verified by passwordless logins from master to master/slave1/slave2), simply push that file into the other machines' ~/.ssh:
# scp -P 48490 authorized_keys master:/root/.ssh/
# scp -P 48490 authorized_keys slave1:/root/.ssh/
# scp -P 48490 authorized_keys slave2:/root/.ssh/
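Alternatively, ssh-copy-id appends the key and fixes permissions in one step; note that the very old OpenSSH shipped with CentOS 6 may need the quoted form (an aside of mine, not in the original post):

# ssh-copy-id -p 48490 root@slave1       # recent OpenSSH versions
# ssh-copy-id '-p 48490 root@slave1'     # workaround for older ssh-copy-id scripts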


3. Hadoop Cluster Installation and Configuration

 

The hadoop, hbase, and zookeeper packages are all unpacked under /data/yunva/ and renamed, giving this layout:
/data/yunva/hadoop-2.7.3
/data/yunva/hbase-1.2.5
/data/yunva/zookeeper-3.4.6


3.1 Modify the Hadoop Configuration

 

All configuration files live under /data/yunva/hadoop-2.7.3/etc/hadoop/.

3.1.1 core-site.xml


<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>
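Note: fs.default.name still works in Hadoop 2.7 but is deprecated; the same setting under its current name (equivalent, shown here for reference) would be:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>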


3.1.2 hadoop-env.sh
Add the JDK path; machines whose JDK lives elsewhere need their own value:
export JAVA_HOME=/usr/java/jdk1.8.0_111


3.1.3 hdfs-site.xml
# first create Hadoop's name and data directories
# mkdir -p /data/yunva/hadoop-2.7.3/hadoop/name
# mkdir -p /data/yunva/hadoop-2.7.3/hadoop/data


<configuration>
   <property>
        <name>dfs.name.dir</name>
        <value>/data/yunva/hadoop-2.7.3/hadoop/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/data/yunva/hadoop-2.7.3/hadoop/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
        <!-- note: this cluster has only two DataNodes, so blocks will stay
             under-replicated at 3; with two slaves, 2 is the practical maximum -->
    </property>
</configuration>


3.1.4 mapred-site.xml

# mv mapred-site.xml.template mapred-site.xml


<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
    </property>
</configuration>
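Note: mapred.job.tracker is an MRv1 (JobTracker) property, and YARN, which start-all.sh brings up on Hadoop 2.x, ignores it. If you want MapReduce jobs to actually run on YARN, the usual property (my addition, not in the original post) is:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>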


3.1.5 Edit the slaves file, replacing localhost with the slave hostnames:
# cat /data/yunva/hadoop-2.7.3/etc/hadoop/slaves

slave1
slave2

Note: apply the same configuration on all three machines, under the same paths (adjusting the JDK path where it differs).
scp copies the tree from master to the slaves easily; remember the non-default ssh port:

scp -P 48490 -r /data/yunva/hadoop-2.7.3/     slave1:/data/yunva
scp -P 48490 -r /data/yunva/hadoop-2.7.3/     slave2:/data/yunva
3.2 Start the Hadoop Cluster

On master, change into /data/yunva/hadoop-2.7.3/ and run:

# bin/hadoop namenode -format

This formats the NameNode; it runs once before the very first start and never again afterwards.

Then start Hadoop (start-all.sh is deprecated in 2.x in favor of start-dfs.sh plus start-yarn.sh, but it still works):

# sbin/start-all.sh

jps should now show three processes besides Jps itself:

# jps


30613 NameNode
30807 SecondaryNameNode
887 Jps
30972 ResourceManager
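To confirm that both DataNodes registered, you can also ask HDFS for a report (a check I find useful; not in the original post):

# bin/hdfs dfsadmin -report | grep 'Live datanodes'

It should report two live DataNodes, slave1 and slave2.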

Looking ahead to HBase (section 5), hbase-env.sh must likewise be set per machine because the java paths differ:

master

export JAVA_HOME=/usr/java/jdk1.8.0_111
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"  # required because sshd listens on 48490 instead of the default 22

slave1
export JAVA_HOME=/usr/java/jdk1.8.0_65
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"

slave2

export JAVA_HOME=/usr/java/jdk1.8.0_102
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"


4. ZooKeeper Cluster Installation and Configuration

See my earlier write-up on deploying zookeeper-3.4.6 on CentOS 6.5, both clustered and standalone: http://blog.csdn.net/reblue520/article/details/52279486
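In short, for this three-node ensemble the deployment boils down to something like the following (a minimal sketch with the usual defaults; the dataDir path is my assumption):

# /data/yunva/zookeeper-3.4.6/conf/zoo.cfg  (identical on all three nodes)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/yunva/zookeeper-3.4.6/data
clientPort=2181
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888

Each node also needs a myid file under dataDir matching its server.N line, e.g. on master:

# echo 1 > /data/yunva/zookeeper-3.4.6/data/myid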


5. HBase Cluster Installation and Configuration

The configuration files live in /data/yunva/hbase-1.2.5/conf.


5.1 hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_111  # set per machine where the JDK path differs
export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/
export HBASE_MANAGES_ZK=false
export HBASE_SSH_OPTS="-p 48490"  # required because sshd is not on the default port 22


5.2 hbase-site.xml (keep identical on all nodes)


<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://master:9000/hbase</value>
    </property>
    <property>
        <name>hbase.master</name>
        <value>master</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>master,slave1,slave2</value>
    </property>
    <property>
        <name>zookeeper.session.timeout</name>
        <value>60000000</value>
        <!-- note: 60,000,000 ms is far beyond what ZooKeeper will grant; the effective
             timeout is capped by the server's maxSessionTimeout (40 s by default) -->
    </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
</configuration>


5.3 Edit regionservers


Add the slave list to the regionservers file:


slave1
slave2

5.4 Distribute and Sync the Installation

Copy the whole HBase installation directory to every slave server:

$ scp -P 48490 -r /data/yunva/hbase-1.2.5  slave1:/data/yunva/
$ scp -P 48490 -r /data/yunva/hbase-1.2.5  slave2:/data/yunva/

6. Start the Cluster


1. Start ZooKeeper (run this on each of the three nodes):


/data/yunva/zookeeper-3.4.6/bin/zkServer.sh start
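Then verify the ensemble elected a leader (a check of mine, not in the original post):

/data/yunva/zookeeper-3.4.6/bin/zkServer.sh status

One node should report Mode: leader and the other two Mode: follower.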


2. Start Hadoop (on master):


/data/yunva/hadoop-2.7.3/sbin/start-all.sh


3. Start HBase (on master; it reaches the regionservers over ssh):


/data/yunva/hbase-1.2.5/bin/start-hbase.sh
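Once HBase is up it creates its root directory in HDFS, which makes for a quick sanity check (my suggestion):

/data/yunva/hadoop-2.7.3/bin/hdfs dfs -ls /

The listing should include /hbase.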


4. Process lists on master and the slaves after startup:


[root@master ~]# jps
Jps
SecondaryNameNode   # Hadoop process
NameNode            # Hadoop master process
ResourceManager     # Hadoop (YARN) process
HMaster             # HBase master process
QuorumPeerMain      # ZooKeeper process


[root@slave1 ~]# jps
Jps
QuorumPeerMain       # ZooKeeper process
DataNode             # Hadoop slave process
HRegionServer        # HBase slave process

(If YARN came up correctly you should also see a NodeManager on each slave.)

5. Verify from the HBase shell
# cd /data/yunva/hbase-1.2.5/
[root@test6_vedio hbase-1.2.5]# bin/hbase shell
2017-04-28 09:51:51,479 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/yunva/hbase-1.2.5/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/yunva/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar 1 00:34:48 CST 2017


hbase(main):001:0> list
TABLE
0 row(s) in 0.2620 seconds

=> []

hbase(main):003:0> create 'scores', 'grade', 'course'
0 row(s) in 1.3300 seconds

=> Hbase::Table - scores
hbase(main):004:0> list
TABLE
scores
1 row(s) in 0.0100 seconds

=> ["scores"]

 

6. Verify from the ZooKeeper shell


[root@test6_vedio zookeeper-3.4.6]# bin/zkCli.sh -server localhost:2181
...
Welcome to ZooKeeper!
JLine support is enabled
...
Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x35bb23d68ba0003, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, hbase]

[zk: localhost:2181(CONNECTED) 1] ls /hbase
[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, flush-table-proc, region-in-transition, online-snapshot, master, running, recovering-regions, draining, namespace, hbaseid, table]

 

Visiting the default HTTP management ports shows the state of the cluster:
hadoop (YARN ResourceManager):
http://IP:8088/cluster/cluster

hbase (HMaster):
http://IP:16010/master-status

hdfs (NameNode):
http://IP:50070/dfshealth.html#tab-overview


---------------------
Author: 鄭子明
Source: CSDN
Original post: https://blog.csdn.net/reblue520/article/details/70888850