
Hadoop 3.0 Cluster Installation (with HDFS HA Configuration)

Hadoop 3.0 has been released, and I wanted to try out its new features and the MapReduce performance improvements. The setup below uses 6 machines as a Hadoop cluster, with the hostnames hadoop1, hadoop2, hadoop3, hadoop4, hadoop5, and hadoop6: hadoop1-3 serve as NameNodes and hadoop4-6 as DataNodes.

I. Prerequisites

1. All 6 machines have a JDK installed, with the JDK environment variables configured (JDK 1.8 is recommended).

2. A ZooKeeper ensemble is installed and running on the cluster; HDFS HA depends on it.

Here I assume ZooKeeper is installed on hadoop1, hadoop2, and hadoop3.

3. Passwordless SSH login is configured between all 6 machines.

II. Hadoop 3.0 Installation Steps

1. Download Hadoop 3.0

2. Extract the archive
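
A minimal sketch of steps 1 and 2, assuming the release is fetched from the Apache archive and installed under /opt, the path used throughout this post:

cd /opt
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz
tar -zxf hadoop-3.0.0.tar.gz    # unpacks to /opt/hadoop-3.0.0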

3. Edit the hadoop-env.sh configuration file and set the JDK environment variable:

export JAVA_HOME=/opt/jdk1.8.0_121
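
If you prefer to apply this from the shell instead of editing the file by hand, a one-liner sketch (assuming the /opt/hadoop-3.0.0 install path from above):

echo 'export JAVA_HOME=/opt/jdk1.8.0_121' >> /opt/hadoop-3.0.0/etc/hadoop/hadoop-env.sh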

4. Edit the hdfs-site.xml configuration file:

<configuration>
  <!-- Hadoop 3.0 HA Configuration -->
  <property>
     <name>dfs.nameservices</name>
     <value>hdfscluster</value>
  </property>
  <property>
     <name>dfs.ha.namenodes.hdfscluster</name>
     <value>nn1,nn2,nn3</value>
  </property>
  <property>
     <name>dfs.namenode.rpc-address.hdfscluster.nn1</name>
     <value>hadoop1:9820</value>
  </property>
  <property>
     <name>dfs.namenode.rpc-address.hdfscluster.nn2</name>
     <value>hadoop2:9820</value>
  </property>
  <property>
     <name>dfs.namenode.rpc-address.hdfscluster.nn3</name>
     <value>hadoop3:9820</value>
  </property> 
  <property>
     <name>dfs.namenode.http-address.hdfscluster.nn1</name>
     <value>hadoop1:9870</value>
  </property>
  <property>
     <name>dfs.namenode.http-address.hdfscluster.nn2</name>
     <value>hadoop2:9870</value>
  </property>
  <property>
     <name>dfs.namenode.http-address.hdfscluster.nn3</name>
     <value>hadoop3:9870</value>
  </property> 
  <property>
     <name>dfs.namenode.shared.edits.dir</name>
     <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/hdfscluster</value>
  </property>
  <property>
     <name>dfs.client.failover.proxy.provider.hdfscluster</name>
     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
     <name>dfs.ha.fencing.methods</name>
     <value>sshfence</value>
  </property>
  <property>
     <name>dfs.ha.fencing.ssh.private-key-files</name>
     <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
     <name>dfs.journalnode.edits.dir</name>
     <value>/opt/hadoop-3.0.0/datas/journal</value>
  </property>
  <property>
     <name>dfs.ha.automatic-failover.enabled</name>
     <value>true</value>
  </property>
  <property>
     <name>dfs.replication</name>
     <value>3</value>
  </property>
  <property>
     <name>dfs.permissions.enabled</name>
     <value>false</value>
  </property>
  <property>
     <name>dfs.namenode.name.dir</name>
     <value>/opt/hadoop-3.0.0/datas/namenode</value>
  </property>
  <property>
     <name>dfs.datanode.data.dir</name>
     <value>/opt/hadoop-3.0.0/datas/datanode</value>
  </property>
  <property>
     <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
     <value>false</value>
  </property>
  <property>
     <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
     <value>DEFAULT</value>
  </property>
  <property>
     <name>dfs.support.append</name>
     <value>true</value>
  </property>
</configuration>
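
Once this file is in place, the stock hdfs getconf tool offers a quick sanity check that the HA wiring is being read as intended:

/opt/hadoop-3.0.0/bin/hdfs getconf -confKey dfs.nameservices    # expect: hdfscluster
/opt/hadoop-3.0.0/bin/hdfs getconf -namenodes                   # expect: hadoop1 hadoop2 hadoop3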

5. Edit the core-site.xml configuration file:

<configuration>
  <property>
     <name>fs.defaultFS</name>
     <value>hdfs://hdfscluster</value>
  </property>
  <property>
     <name>hadoop.tmp.dir</name>
     <value>/opt/hadoop-3.0.0/tmp</value>
  </property>
  <property>
     <name>fs.trash.interval</name>
     <value>1440</value>
  </property>
  <property>
     <name>ha.zookeeper.quorum</name>
     <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
</configuration>

6. Edit the yarn-site.xml configuration file:

<configuration>
<!-- Site specific YARN configuration properties -->
  <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce_shuffle</value>
  </property>
  <property>
     <name>yarn.resourcemanager.address</name>
     <value>hadoop1:8032</value>
  </property>
  <property>
     <name>yarn.resourcemanager.scheduler.address</name>
     <value>hadoop1:8030</value>
  </property>
  <property>
     <name>yarn.resourcemanager.resource-tracker.address</name>
     <value>hadoop1:8031</value>
  </property>
  <property>
     <name>yarn.nodemanager.vmem-check-enabled</name>
     <value>false</value>
  </property>
  <property>
     <name>yarn.nodemanager.pmem-check-enabled</name>
     <value>false</value>
  </property>
</configuration>

7. Edit the mapred-site.xml configuration file:

<configuration>
  <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
  </property>
  <property>
     <name>mapreduce.jobhistory.address</name>
     <value>hadoop1:10020</value>
  </property>
  <property>
     <name>mapreduce.jobhistory.webapp.address</name>
     <value>hadoop1:19888</value>
  </property>
  <property>
     <name>mapreduce.application.classpath</name>
     <value>
        /opt/hadoop-3.0.0/etc/hadoop,
        /opt/hadoop-3.0.0/share/hadoop/common/*,
        /opt/hadoop-3.0.0/share/hadoop/common/lib/*,
        /opt/hadoop-3.0.0/share/hadoop/hdfs/*,
        /opt/hadoop-3.0.0/share/hadoop/hdfs/lib/*,
        /opt/hadoop-3.0.0/share/hadoop/mapreduce/*,
        /opt/hadoop-3.0.0/share/hadoop/mapreduce/lib/*,
        /opt/hadoop-3.0.0/share/hadoop/yarn/*,
        /opt/hadoop-3.0.0/share/hadoop/yarn/lib/*
     </value>
  </property>
</configuration>
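
To verify that the jars referenced above actually exist on disk, the bundled hadoop classpath command prints the classpath the client scripts resolve; it is not identical to mapreduce.application.classpath, but the share/hadoop/* entries should line up:

/opt/hadoop-3.0.0/bin/hadoop classpath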

8. Edit the workers configuration file (in Hadoop 3 this replaces the old slaves file); list the DataNode hosts:

hadoop4
hadoop5
hadoop6

9. Create the corresponding data directories:

mkdir -p /opt/hadoop-3.0.0/datas/journal

mkdir -p /opt/hadoop-3.0.0/datas/namenode

mkdir -p /opt/hadoop-3.0.0/datas/datanode

10. Copy the configured Hadoop directory to the other machines (hadoop2-6):

scp -r /opt/hadoop-3.0.0 hadoop2:/opt

Run the same command for each of the other machines, or use a loop as sketched below.
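
A small loop sketch that covers all five machines in one go (relying on the passwordless SSH from prerequisite 3):

for host in hadoop2 hadoop3 hadoop4 hadoop5 hadoop6; do
    scp -r /opt/hadoop-3.0.0 ${host}:/opt
done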

III. Starting the Hadoop Cluster

Note: the following steps must be performed before starting the cluster.

1. Format the ZKFC state in ZooKeeper (run once, on one of the NameNode hosts, with ZooKeeper already running):

hdfs zkfc -formatZK

2. Start a JournalNode on hadoop1, hadoop2, and hadoop3 (each host listed in dfs.namenode.shared.edits.dir):

hadoop-daemon.sh start journalnode
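
hadoop-daemon.sh still works in 3.x but prints a deprecation warning; hdfs --daemon is the current form. A sketch that starts all three JournalNodes from hadoop1 over SSH:

for host in hadoop1 hadoop2 hadoop3; do
    ssh ${host} '/opt/hadoop-3.0.0/bin/hdfs --daemon start journalnode'
done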

3. Format the NameNode on hadoop1

Command: /opt/hadoop-3.0.0/bin/hadoop namenode -format (the hadoop namenode form is deprecated in 3.x; /opt/hadoop-3.0.0/bin/hdfs namenode -format is the current equivalent)

4. Copy the formatted NameNode metadata from hadoop1 to the other two NameNodes, hadoop2 and hadoop3

scp -r /opt/hadoop-3.0.0/datas/namenode/* hadoop2:/opt/hadoop-3.0.0/datas/namenode/

scp -r /opt/hadoop-3.0.0/datas/namenode/* hadoop3:/opt/hadoop-3.0.0/datas/namenode/
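
Alternatively, Hadoop has a built-in command for seeding standby NameNodes, hdfs namenode -bootstrapStandby. It needs the formatted NameNode running first, which is why the plain scp above is simpler at this point; a sketch for completeness:

# on hadoop1: bring up the freshly formatted NameNode
/opt/hadoop-3.0.0/bin/hdfs --daemon start namenode

# then on hadoop2 and on hadoop3: pull the metadata from the running NameNode
/opt/hadoop-3.0.0/bin/hdfs namenode -bootstrapStandby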

5. With the steps above completed, the Hadoop cluster can now be started.

Before starting the HDFS cluster, first stop the JournalNodes that were started earlier (start-dfs.sh starts them itself).
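
The stop command mirrors the start command from step 2; run it on each of hadoop1-3:

/opt/hadoop-3.0.0/bin/hdfs --daemon stop journalnode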

Start the HDFS cluster: /opt/hadoop-3.0.0/sbin/start-dfs.sh
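
start-dfs.sh only brings up HDFS (NameNodes, DataNodes, JournalNodes, and ZKFCs). Since yarn-site.xml and mapred-site.xml were configured above, YARN and the JobHistory server presumably need starting too; a sketch, run on hadoop1 (the ResourceManager and JobHistory host per the configs):

/opt/hadoop-3.0.0/sbin/start-yarn.sh
/opt/hadoop-3.0.0/bin/mapred --daemon start historyserver    # serves hadoop1:10020 / hadoop1:19888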

6. Verify that HDFS started successfully

Visit each NameNode web UI in a browser: http://hadoop1:9870, http://hadoop2:9870, and http://hadoop3:9870 (the dfs.namenode.http-address values configured above).

If all three pages load, the installation succeeded. Pay attention to the state of the three NameNodes: one should be active and the other two standby.
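
Besides the web UIs, each NameNode's state can be queried from the command line with the stock haadmin tool, using the nn1/nn2/nn3 IDs defined in hdfs-site.xml:

/opt/hadoop-3.0.0/bin/hdfs haadmin -getServiceState nn1    # prints active or standby
/opt/hadoop-3.0.0/bin/hdfs haadmin -getServiceState nn2
/opt/hadoop-3.0.0/bin/hdfs haadmin -getServiceState nn3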

7. Notes

Many default ports have changed in Hadoop 3.0. Details (old --> new):

NameNode ports:

50470 --> 9871

50070 --> 9870

8020 --> 9820

Secondary NameNode ports:

50091 --> 9869

50090 --> 9868

DataNode ports:

50020 --> 9867

50010 --> 9866

50475 --> 9865

50075 --> 9864