Hadoop 2.7.2 + ZooKeeper High-Availability Cluster Deployment

I. Environment

Virtual machine: VMware 11

Operating system: Ubuntu 16.04

Hadoop version: 2.7.2

ZooKeeper version: 3.4.9

II. Node Deployment Layout

III. Adding Hosts Entries

sudo gedit /etc/hosts

Configure the following on wxzz-pc, wxzz-pc0, wxzz-pc1, and wxzz-pc2:

127.0.0.1 localhost
192.168.72.132 wxzz-pc
192.168.72.138 wxzz-pc0
192.168.72.135 wxzz-pc1
192.168.72.136 wxzz-pc2
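
As a quick sanity check, you can confirm on every node that each hostname resolves and is reachable, for example with a small loop like this:

         # Ping each cluster hostname once; any FAILED line means a bad hosts entry
         for h in wxzz-pc wxzz-pc0 wxzz-pc1 wxzz-pc2; do
             ping -c 1 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
         done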

IV. ZooKeeper Configuration

The zoo.cfg configuration file contains the following:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper-3.4.9/tmp/dataDir
dataLogDir=/opt/zookeeper-3.4.9/tmp/logs/
clientPort=2181
server.1=wxzz-pc:2182:2183
server.2=wxzz-pc0:2182:2183
server.3=wxzz-pc1:2182:2183

Create a file named "myid" under /opt/zookeeper-3.4.9/tmp/dataDir on each of wxzz-pc, wxzz-pc0, and wxzz-pc1, containing 1, 2, and 3 respectively; the value must match the X in the corresponding server.X=...:2182:2183 line.
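
A minimal sketch of that step plus starting the ensemble, assuming ZooKeeper is installed under /opt/zookeeper-3.4.9 on every node (adjust the id per host):

         # On wxzz-pc write 1, on wxzz-pc0 write 2, on wxzz-pc1 write 3
         mkdir -p /opt/zookeeper-3.4.9/tmp/dataDir
         echo 1 > /opt/zookeeper-3.4.9/tmp/dataDir/myid

         # Start each server, then confirm one leader and two followers
         /opt/zookeeper-3.4.9/bin/zkServer.sh start
         /opt/zookeeper-3.4.9/bin/zkServer.sh status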

V. Hadoop Configuration

1. core-site.xml

<configuration>
        <property>
           <name>fs.defaultFS</name>
           <value>hdfs://myhadoop</value>
        </property>
        <property>
           <name>hadoop.tmp.dir</name>
           <value>/opt/hadoop-2.7.2/tmp/hadoop-${user.name}</value>
        </property>
        <property>
           <name>ha.zookeeper.quorum</name>
           <value>wxzz-pc:2181,wxzz-pc0:2181,wxzz-pc1:2181</value>
        </property>
</configuration>
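
Note that fs.defaultFS points at the logical nameservice (defined below in hdfs-site.xml) rather than a host:port; a logical HA URI should not carry a port. Once core-site.xml is distributed, the value can be sanity-checked with the stock 2.7.2 CLI, run from the Hadoop home directory:

         bin/hdfs getconf -confKey fs.defaultFS        # should print hdfs://myhadoop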

2. hdfs-site.xml

<configuration>
     <property>
        <name>dfs.replication</name>
        <value>2</value>
     </property>
     <property>
         <name>dfs.blocksize</name>
         <value>10485760</value>
     </property>
     <property>
       <name>hadoop.tmp.dir</name>
       <value>/opt/hadoop-2.7.2/tmp/hadoop-${user.name}</value>
     </property>
     <property>
       <name>dfs.namenode.name.dir</name>
       <value>${hadoop.tmp.dir}/dfs/name</value>
     </property>
    <property>
       <name>dfs.datanode.data.dir</name>
       <value>${hadoop.tmp.dir}/dfs/data</value>
     </property>
     <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
     </property>
     <property>   
       <name>dfs.webhdfs.enabled</name>   
       <value>true</value>   
    </property>
     <property>
       <name>dfs.nameservices</name>
       <value>myhadoop</value>
     </property>
     <property>
       <name>dfs.ha.namenodes.myhadoop</name>
       <value>nn1,nn2</value>
     </property>
     <property>
       <name>dfs.namenode.rpc-address.myhadoop.nn1</name>
       <value>wxzz-pc:8020</value>
     </property>
     <property>
       <name>dfs.namenode.http-address.myhadoop.nn1</name>
       <value>wxzz-pc:50070</value>
     </property>
      <property>
       <name>dfs.namenode.rpc-address.myhadoop.nn2</name>
       <value>wxzz-pc0:8020</value>
     </property>
    <property>
       <name>dfs.namenode.http-address.myhadoop.nn2</name>
       <value>wxzz-pc0:50070</value>
     </property>
    <property>
        <name>dfs.namenode.servicerpc-address.myhadoop.nn1</name>
        <value>wxzz-pc:53310</value>
     </property>
     <property>
        <name>dfs.namenode.servicerpc-address.myhadoop.nn2</name>
        <value>wxzz-pc0:53310</value>
     </property>
     <property>
            <name>dfs.namenode.shared.edits.dir</name>
             <value>qjournal://wxzz-pc:8485;wxzz-pc0:8485;wxzz-pc1:8485/myhadoop</value>
     </property>
     <property>
        <name>dfs.client.failover.proxy.provider.myhadoop</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
     </property>
     <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/hadoop-2.7.2/journal</value>
     </property>
     <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
     </property>
     <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/opt/hadoop-2.7.2/.ssh/id_rsa</value>
     </property>
    <property>
       <name>dfs.ha.fencing.ssh.connect-timeout</name>
       <value>1000</value>
     </property>
     <property>
       <name>dfs.namenode.handler.count</name>
       <value>10</value>
     </property>
    <property>
       <name>dfs.ha.automatic-failover.enabled.myhadoop</name>
       <value>true</value>
     </property>
</configuration>
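
With hdfs-site.xml on all nodes, the HA wiring can be checked from the command line before starting anything (both getconf queries are part of the stock 2.7.2 CLI):

         bin/hdfs getconf -namenodes                               # should list wxzz-pc wxzz-pc0
         bin/hdfs getconf -confKey dfs.ha.namenodes.myhadoop       # should print nn1,nn2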

3. mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>0.0.0.0:10020</value>
    </property>
    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>0.0.0.0:19888</value>
    </property>
</configuration>
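
The two jobhistory addresses only take effect once a JobHistory Server is actually running; start-yarn.sh does not launch it. If desired, it can be started on any node with the stock 2.7.2 script:

         sbin/mr-jobhistory-daemon.sh start historyserver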

4. yarn-site.xml

<configuration>
        <property>
           <name>yarn.resourcemanager.ha.enabled</name>
           <value>true</value>
        </property>
        <property>
           <name>yarn.resourcemanager.cluster-id</name>
           <value>rm-id</value>
        </property>
        <property>
           <name>yarn.resourcemanager.ha.rm-ids</name>
           <value>rm1,rm2</value>
        </property>
        <property>
           <name>yarn.resourcemanager.hostname.rm1</name>
           <value>wxzz-pc</value>
        </property>
        <property>
           <name>yarn.resourcemanager.hostname.rm2</name>
           <value>wxzz-pc0</value>
        </property>
        <property>
           <name>yarn.resourcemanager.zk-address</name>
           <value>wxzz-pc:2181,wxzz-pc0:2181,wxzz-pc1:2181</value>
        </property>
        <property>
           <name>yarn.nodemanager.aux-services</name>
           <value>mapreduce_shuffle</value>
        </property>
</configuration>
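
After YARN is up (see section VI), each ResourceManager's HA state can be queried with the stock CLI:

         bin/yarn rmadmin -getServiceState rm1     # active or standby
         bin/yarn rmadmin -getServiceState rm2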

VI. Starting the Services

1. On each JournalNode host (wxzz-pc, wxzz-pc0, wxzz-pc1), start the JournalNode daemon:

         sbin/hadoop-daemon.sh start journalnode

2. On [nn1], format HDFS and start the NameNode:

         bin/hdfs namenode -format

         sbin/hadoop-daemon.sh start namenode

3. On [nn2], sync [nn1]'s metadata (bootstrap the standby) and start the NameNode:

         bin/hdfs namenode -bootstrapStandby

         sbin/hadoop-daemon.sh start namenode

After these three steps, both [nn1] and [nn2] are in the standby state.
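
This can be confirmed per NameNode with the stock 2.7.2 haadmin command:

         bin/hdfs haadmin -getServiceState nn1     # standby
         bin/hdfs haadmin -getServiceState nn2     # standby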

4. On [nn1], transition it to the active state:

         bin/hdfs haadmin -transitionToActive --forcemanual nn1

5. On [nn1], start all DataNodes:

         sbin/hadoop-daemons.sh start datanode
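
Once the DataNodes register, the cluster can be summarized with:

         bin/hdfs dfsadmin -report     # live datanodes should match the node list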

6. On [nn1], start YARN:

         sbin/start-yarn.sh
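
Note that in Hadoop 2.7, start-yarn.sh only starts a ResourceManager on the local node, so with RM HA enabled the second ResourceManager (rm2 on wxzz-pc0) presumably has to be started by hand:

         sbin/yarn-daemon.sh start resourcemanager     # run on wxzz-pc0
         bin/yarn node -list                           # NodeManagers appear as they register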

To shut the cluster down, run sbin/stop-all.sh on [nn1]. Subsequent startups follow the same steps, except that the format operation in step 2 must not be repeated.
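Because automatic failover is enabled in hdfs-site.xml (dfs.ha.automatic-failover.enabled.myhadoop=true), the usual additional steps, not shown in the sequence above, would be to initialize the HA state in ZooKeeper once and then run a ZKFailoverController beside each NameNode; a sketch with the stock 2.7.2 commands:

         bin/hdfs zkfc -formatZK              # run once on one NameNode, with ZooKeeper up
         sbin/hadoop-daemon.sh start zkfc     # run on wxzz-pc and on wxzz-pc0

Without running ZKFCs, failover remains manual, which is why step 4 needs --forcemanual.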

VII. Results

Admin web UI: (screenshot not reproduced here)

Command-line output: (screenshot not reproduced here)
