
Setting Up an HA (High-Availability) Architecture for a Hadoop Cluster (Detailed Steps, Verified)

I. Cluster Planning

ZooKeeper cluster:

192.168.182.12 (bigdata12)
192.168.182.13 (bigdata13)
192.168.182.14 (bigdata14)  

Hadoop cluster:

192.168.182.12 (bigdata12)   NameNode1 (active)    ResourceManager1 (active)    JournalNode
192.168.182.13 (bigdata13)   NameNode2 (standby)   ResourceManager2 (standby)   JournalNode
192.168.182.14 (bigdata14)   DataNode1      NodeManager1
192.168.182.15 (bigdata15)   DataNode2      NodeManager2

II. Preparation

1. Install the JDK (required on every machine)

I am using the jdk-8u144-linux-x64.tar.gz package here (matching the paths used below).

Extract the JDK:
tar -zxvf jdk-8u144-linux-x64.tar.gz -C ~/training

2. Configure environment variables:

1) Configure the Java environment variables:
vi ~/.bash_profile
export JAVA_HOME=/root/training/jdk1.8.0_144
export PATH=$JAVA_HOME/bin:$PATH
2) Apply them:
source ~/.bash_profile
3) Verify the installation:
java -version
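
If the installation succeeded, it prints something like the following (the exact build string may vary):

java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)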

3. Map IP addresses to hostnames, so the machines can reach one another by name for SSH and ping

vi /etc/hosts

Add the following entries:

 192.168.182.12 bigdata12
 192.168.182.13 bigdata13
 192.168.182.14 bigdata14
 192.168.182.15 bigdata15

4. Configure passwordless login

1) Generate a public/private key pair on every machine
ssh-keygen -t rsa

What this does: over the SSH protocol, it generates a key pair using the RSA asymmetric-encryption algorithm: a public key and a private key.

2) Copy each machine's public key to all the machines

Note: run all four of the following commands on every machine:

ssh-copy-id -i .ssh/id_rsa.pub root@bigdata12
ssh-copy-id -i .ssh/id_rsa.pub root@bigdata13
ssh-copy-id -i .ssh/id_rsa.pub root@bigdata14
ssh-copy-id -i .ssh/id_rsa.pub root@bigdata15
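
Once the keys are distributed, a quick check confirms passwordless login works from any machine (using bigdata13 as an example target):

ssh bigdata13 date

If the date prints without a password prompt, the setup is correct.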

III. Install the ZooKeeper Cluster (on bigdata12)

Install and configure ZooKeeper on the master node (bigdata12) first; it will be copied to the other nodes later.

I am using zookeeper-3.4.10.tar.gz here.

1. Extract ZooKeeper:

tar -zxvf zookeeper-3.4.10.tar.gz -C ~/training

2. Configure and apply the environment variables (in ~/.bash_profile):

export ZOOKEEPER_HOME=/root/training/zookeeper-3.4.10
export PATH=$ZOOKEEPER_HOME/bin:$PATH
source ~/.bash_profile

3. Edit the zoo.cfg configuration file:

vi /root/training/zookeeper-3.4.10/conf/zoo.cfg
Change:
dataDir=/root/training/zookeeper-3.4.10/tmp
Append at the end:
server.1=bigdata12:2888:3888
server.2=bigdata13:2888:3888
server.3=bigdata14:2888:3888
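
If conf/zoo.cfg does not exist yet, create it from the bundled sample first:

cp /root/training/zookeeper-3.4.10/conf/zoo_sample.cfg /root/training/zookeeper-3.4.10/conf/zoo.cfg

For reference, the finished file should look roughly like this (tickTime, initLimit, syncLimit, and clientPort are the stock defaults carried over from the sample):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/training/zookeeper-3.4.10/tmp
clientPort=2181
server.1=bigdata12:2888:3888
server.2=bigdata13:2888:3888
server.3=bigdata14:2888:3888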

4. Create the myid file

Create the data directory /root/training/zookeeper-3.4.10/tmp, then write this node's id into a myid file inside it (note: myid must be a regular file, so it is the echo that creates it, not mkdir):

mkdir -p /root/training/zookeeper-3.4.10/tmp

echo 1 > /root/training/zookeeper-3.4.10/tmp/myid

5. Copy the configured ZooKeeper to the other nodes and adjust each node's myid file

scp -r /root/training/zookeeper-3.4.10/ bigdata13:/root/training
scp -r /root/training/zookeeper-3.4.10/ bigdata14:/root/training

On bigdata13 and bigdata14, open the myid file and change the 1 to 2 and 3 respectively:

vi myid

On bigdata13, enter: 2
On bigdata14, enter: 3
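
A quick sanity check on each of the three nodes:

cat /root/training/zookeeper-3.4.10/tmp/myid

It should print 1 on bigdata12, 2 on bigdata13, and 3 on bigdata14.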

IV. Install the Hadoop Cluster (configure on bigdata12, then copy out)

1. Edit hadoop-env.sh

export JAVA_HOME=/root/training/jdk1.8.0_144
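
The commands used later (hadoop-daemon.sh, start-all.sh, hdfs, yarn) assume Hadoop's bin and sbin directories are on the PATH. If they are not yet, add the following to ~/.bash_profile on every node, following the same pattern as the JDK and ZooKeeper entries above, and source it again:

export HADOOP_HOME=/root/training/hadoop-2.7.3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH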

2. Edit core-site.xml

<configuration>
<!-- Set the HDFS nameservice to ns1 -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1</value>
</property>
<!-- Where HDFS stores its data; by default it goes under Linux's /tmp -->
<property>
<name>hadoop.tmp.dir</name>
<value>/root/training/hadoop-2.7.3/tmp</value>
</property>
<!-- The ZooKeeper quorum address -->
<property>
<name>ha.zookeeper.quorum</name>
<value>bigdata12:2181,bigdata13:2181,bigdata14:2181</value>
</property>
</configuration>

3. Edit hdfs-site.xml (this declares which NameNodes belong to the nameservice)

<configuration>
<!-- Set the HDFS nameservice to ns1; must match core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>

<!-- ns1 contains two NameNodes: nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2</value>
</property>

<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>bigdata12:9000</value>
</property>

<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>bigdata12:50070</value>
</property>

<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>bigdata13:9000</value>
</property>

<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>bigdata13:50070</value>
</property>

<!-- Where the NameNode edit log is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://bigdata12:8485;bigdata13:8485/ns1</value>
</property>

<!-- Where each JournalNode keeps its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/root/training/hadoop-2.7.3/journal</value>
</property>

<!-- Enable automatic failover when the active NameNode fails -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>

<!-- The proxy provider clients use to locate the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- Fencing methods; when listing more than one, put each on its own line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>

<!-- sshfence requires passwordless SSH, so point it at the private key -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>

<!-- Timeout for the sshfence method, in milliseconds -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>

4. Edit mapred-site.xml
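
In a stock Hadoop 2.7.3 distribution this file ships only as mapred-site.xml.template, so create it first if it is missing:

cp /root/training/hadoop-2.7.3/etc/hadoop/mapred-site.xml.template /root/training/hadoop-2.7.3/etc/hadoop/mapred-site.xml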

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

Next, configure HA for YARN.

5. Edit yarn-site.xml

<configuration>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>

<!-- Cluster id for the RM pair -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>

<!-- Logical ids of the two RMs -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>

<!-- Hostname of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>bigdata12</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>bigdata13</value>
</property>

<!-- The ZooKeeper quorum address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>bigdata12:2181,bigdata13:2181,bigdata14:2181</value>
</property>

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

6. Edit slaves with the hostnames of the worker nodes

bigdata14
bigdata15

7. Copy the configured Hadoop to the other nodes

scp -r /root/training/hadoop-2.7.3/ root@bigdata13:/root/training/
scp -r /root/training/hadoop-2.7.3/ root@bigdata14:/root/training/
scp -r /root/training/hadoop-2.7.3/ root@bigdata15:/root/training/

V. Start the ZooKeeper Cluster

On each of the three ZooKeeper machines (bigdata12, bigdata13, bigdata14), run:

zkServer.sh start
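
Once all three are up, check each node's role:

zkServer.sh status

One node should report Mode: leader and the other two Mode: follower.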

VI. Start the JournalNodes

On bigdata12 and bigdata13, start the JournalNode daemon:

hadoop-daemon.sh start journalnode
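
Verify with jps on both nodes; the process list should now include a JournalNode entry:

jps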

VII. Format HDFS and ZooKeeper (run on bigdata12)

Format HDFS:

hdfs namenode -format

Copy the newly created metadata under /root/training/hadoop-2.7.3/tmp to the same path on bigdata13 (the command below is run from inside that tmp directory, copying its dfs/ subdirectory):

scp -r dfs/ root@bigdata13:/root/training/hadoop-2.7.3/tmp

Format the HA state in ZooKeeper:

hdfs zkfc -formatZK

Log output: INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/ns1 in ZK.

This line indicates that a /hadoop-ha/ns1 znode was created in ZooKeeper's tree to store the NameNodes' HA status information.
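
You can also confirm the znode directly from the ZooKeeper command-line client (the ls is typed at the client's prompt):

zkCli.sh
ls /hadoop-ha

The output should include ns1.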

VIII. Start the Hadoop Cluster (run on bigdata12)

Start the whole cluster with:

start-all.sh

Log output:
Starting namenodes on [bigdata12 bigdata13]
bigdata12: starting namenode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-namenode-hadoop113.out
bigdata13: starting namenode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-namenode-hadoop112.out
bigdata14: starting datanode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-datanode-hadoop115.out
bigdata15: starting datanode, logging to /root/training/hadoop-2.4.1/logs/hadoop-root-datanode-hadoop114.out
bigdata13: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-       bigdata13.out
bigdata12: starting zkfc, logging to /root/training/hadoop-2.7.3/logs/hadoop-root-zkfc-bigdata12.out

On bigdata13, manually start the second ResourceManager as YARN's standby master (start-all.sh does not start it automatically):

yarn-daemon.sh start resourcemanager
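
To confirm which node of each pair is currently active, query the HA admin tools (nn1/nn2 and rm1/rm2 are the logical ids defined in hdfs-site.xml and yarn-site.xml above):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

One of each pair should report active and the other standby.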

At this point, the HA architecture for the Hadoop cluster has been set up successfully.
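
As an optional final check of automatic failover, you can stop the active NameNode and watch the standby take over. A rough test, assuming nn1 on bigdata12 is currently active:

hadoop-daemon.sh stop namenode        # on bigdata12: stop the active NameNode
hdfs haadmin -getServiceState nn2     # after a few seconds, nn2 should report active
hadoop-daemon.sh start namenode       # bring nn1 back; it rejoins as standby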