一、Installation Preparation

  1. Create a hadoop user account
  2. Assign each machine its IP address
  3. Install Java and set the environment variable in /etc/profile

    export JAVA_HOME=/usr/java/jdk1.7.0_71

  4. Add the hostname mappings to the /etc/hosts file

    172.16.133.149 hadoop101
    172.16.133.150 hadoop102
    172.16.133.151 hadoop103 
    
  5. Install SSH and set up passwordless login between the nodes
  6. Unpack Hadoop

    /hadoop/hadoop-2.6.2
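As a quick sanity check for the hosts mapping in step 4, the sketch below writes the three entries to a scratch file (`hosts.demo`, standing in for `/etc/hosts` so the snippet is safe to run anywhere) and verifies that each hostname appears exactly once:

```shell
# Stand-in for /etc/hosts: on a real cluster, append these lines to
# /etc/hosts on every node instead of a scratch file.
cat > hosts.demo <<'EOF'
172.16.133.149 hadoop101
172.16.133.150 hadoop102
172.16.133.151 hadoop103
EOF

# Each hostname should appear exactly once (prints 1 for each host).
for h in hadoop101 hadoop102 hadoop103; do
  grep -c " $h\$" hosts.demo
done
```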

二、Edit the configuration files under etc/hadoop

Edit hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and the slaves file in turn.

1. hadoop-env.sh

`# Add JAVA_HOME:`
`export JAVA_HOME=/usr/java/jdk1.7.0_71`

2. core-site.xml

```xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/hadoop/hadoop-2.6.2/hdfs/tmp</value>
    </property>
    <property>
        <!-- fs.default.name is deprecated in Hadoop 2.x; use fs.defaultFS -->
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop101:9000</value>
    </property>
</configuration>
```

3. hdfs-site.xml

```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///hadoop/hadoop-2.6.2/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///hadoop/hadoop-2.6.2/hdfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop101:9001</value>
    </property>
    <property>
        <!-- dfs.permissions is deprecated in Hadoop 2.x; use dfs.permissions.enabled -->
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
```
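Hadoop will normally create the local tmp/name/data directories referenced above on first format and start-up, but they can also be pre-created. The sketch below uses `ROOT=./demo-root` as a scratch prefix so it is runnable anywhere; on a real node the directories live directly under `/hadoop`:

```shell
# Pre-create the local directories that core-site.xml and hdfs-site.xml
# point at. ROOT=./demo-root keeps the sketch safe to run anywhere.
ROOT=./demo-root
mkdir -p "$ROOT/hadoop/hadoop-2.6.2/hdfs/tmp" \
         "$ROOT/hadoop/hadoop-2.6.2/hdfs/name" \
         "$ROOT/hadoop/hadoop-2.6.2/hdfs/data"
ls "$ROOT/hadoop/hadoop-2.6.2/hdfs"   # → data  name  tmp
```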

4. mapred-site.xml

In Hadoop 2.x this file ships only as mapred-site.xml.template; copy it to mapred-site.xml first, then add:

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

5. yarn-site.xml

```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop101</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
```

6. slaves

```
hadoop102
hadoop103
```

7. Finally, use scp to copy the entire hadoop-2.6.2 directory, with all its subdirectories, to the same path on the two slaves (hadoop102 and hadoop103):

scp -r /hadoop/hadoop-2.6.2/ hadoop@hadoop102:/hadoop/

scp -r /hadoop/hadoop-2.6.2/ hadoop@hadoop103:/hadoop/
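The two scp commands follow the same pattern, so they can be driven by a loop. The sketch below is a dry run that only prints each command; drop the `echo` to actually execute it on a real cluster (the `hadoop@` user is an assumption, matching the account created in step 1 of the preparation):

```shell
# Dry run: print the scp command for each slave instead of executing it.
# Remove `echo` on a real cluster; hadoop@ assumes the account from step 1.
SRC=/hadoop/hadoop-2.6.2
for host in hadoop102 hadoop103; do
  echo scp -r "$SRC" "hadoop@$host:/hadoop/"
done
# → scp -r /hadoop/hadoop-2.6.2 hadoop@hadoop102:/hadoop/
# → scp -r /hadoop/hadoop-2.6.2 hadoop@hadoop103:/hadoop/
```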

三、Start Hadoop (run from inside the Hadoop directory)

  1. Format the NameNode (this erases any existing HDFS metadata, so only do it on first setup)

    bin/hdfs namenode -format

  2. Start the NameNode, SecondaryNameNode, and DataNodes

    sbin/start-dfs.sh

  3. Start the ResourceManager and NodeManagers

    sbin/start-yarn.sh

  4. Final result

    hadoop101 now runs the NameNode, SecondaryNameNode, and ResourceManager; hadoop102 and hadoop103 each run a DataNode and a NodeManager.
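A quick way to confirm this is to run `jps` on each node (`jps` also prints a process ID before each name, omitted here). Given the configuration above, the output should look roughly like:

```
# on hadoop101 (master)
$ jps
NameNode
SecondaryNameNode
ResourceManager
Jps

# on hadoop102 and hadoop103 (slaves)
$ jps
DataNode
NodeManager
Jps
```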

四、Test Hadoop

  1. Test HDFS

    In a browser, open http://<NameNode hostname or IP>:50070 and check the HDFS status page.

  2. Test the ResourceManager

    In a browser, open http://<ResourceManager hostname or IP>:8088 and check the YARN cluster page.
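50070 and 8088 are the default Hadoop 2.x web UI ports for the NameNode and the ResourceManager. In this cluster both daemons run on hadoop101 (see core-site.xml and yarn-site.xml above), so the concrete URLs follow directly; a tiny sketch:

```shell
# Both master daemons live on hadoop101 in this setup; 50070 and 8088 are
# the Hadoop 2.x default web UI ports.
MASTER=hadoop101
echo "HDFS UI: http://$MASTER:50070"
echo "YARN UI: http://$MASTER:8088"
```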
