一. Environment

Three hosts, with the following hostnames and IPs:
ubuntu1  10.3.19.171
ubuntu2  10.3.19.172
ubuntu3  10.3.19.173
The login user on all three hosts is bigdata, whose home directory is /home/bigdata.
We will deploy a Hadoop cluster across the three hosts: ubuntu1 acts as the namenode, and ubuntu1, ubuntu2, and ubuntu3 all act as datanodes.

二. Installation and Deployment

1. Configure hostnames and the hosts file, and install Java

Set each host's hostname. For example, on ubuntu1 it is configured as:
[email protected]:~$ cat /etc/hostname 
ubuntu1
Set the hostnames of ubuntu2 and ubuntu3 to ubuntu2 and ubuntu3 in the same way.
Next, configure the hosts list on each machine. The configuration is identical on all three hosts; the three cluster entries at the end are the additions:
[email protected]:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       ubuntu

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

10.3.19.171 ubuntu1
10.3.19.172 ubuntu2
10.3.19.173 ubuntu3
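The three entries above can be generated with a small helper so that the identical block is appended on every node. This is only an illustrative sketch; the cluster_hosts_entries function is hypothetical, not part of any tool:

```shell
# Hypothetical helper: print the three cluster entries to append to
# /etc/hosts, built from parallel hostname/IP lists.
cluster_hosts_entries() {
  hosts="ubuntu1 ubuntu2 ubuntu3"
  ips="10.3.19.171 10.3.19.172 10.3.19.173"
  set -- $ips
  for h in $hosts; do
    printf '%s %s\n' "$1" "$h"
    shift
  done
}
# On each node (as root): cluster_hosts_entries >> /etc/hosts
```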

Install and configure Java. All three hosts need Java. The installation path on ubuntu1 is shown below; the other two hosts likewise install it under ~/usr/jdk1.8.0_25.
[email protected]:~/usr/jdk1.8.0_25$ pwd
/home/bigdata/usr/jdk1.8.0_25
Configure the Java-related environment variables:
export JAVA_HOME=/home/bigdata/usr/jdk1.8.0_25
export JRE_HOME=$JAVA_HOME/jre 
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH 
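Before moving on, it is worth verifying that JAVA_HOME really points at a JDK. A minimal sketch, assuming the variables above are already exported; check_java_home is a hypothetical helper:

```shell
# Hypothetical sanity check: confirm JAVA_HOME contains an executable java.
check_java_home() {
  if [ -x "$JAVA_HOME/bin/java" ]; then
    echo "JAVA_HOME ok: $JAVA_HOME"
  else
    echo "JAVA_HOME invalid: $JAVA_HOME" >&2
    return 1
  fi
}
# Usage: check_java_home
```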

2. Configure passwordless SSH login among the three hosts

step1: On each of ubuntu1, ubuntu2, and ubuntu3, run the following command to generate a public/private key pair:
[email protected]:~$ ssh-keygen -t rsa -P ''
Press Enter at each prompt. This creates the .ssh directory under /home/bigdata, containing id_rsa and id_rsa.pub. The -P option supplies the passphrase; -P '' means an empty passphrase.

step2: Copy the public keys to every host so that each host can log in to the others via SSH without a password.

Check whether the file .ssh/authorized_keys exists, and create it if it does not:

[email protected]:~$ ls .ssh/authorized_keys 
.ssh/authorized_keys
Set the permissions of .ssh/authorized_keys to 600:
[email protected]:~$ chmod 600 .ssh/authorized_keys
Copy the contents of /home/bigdata/.ssh/id_rsa.pub from each of the three hosts into /home/bigdata/.ssh/authorized_keys on ubuntu1, then distribute that authorized_keys file back to ubuntu2 and ubuntu3 so that every host trusts every key. After copying, /home/bigdata/.ssh/authorized_keys contains:
[email protected]:~$ cat .ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCrI0nqIW4P7mPbKytk/wo0d1utVQCIgDGQjUZDgVv1Ll9y6PoRtffTqDQ5cHLHnqapsdO/RAfDFUBPaDAHHwZeQ/sKP/c9GWbr9PxjH2dzzVcB6VOll3T0vTLAxMsIevtBj/fN5HRdZTasQeLtibY5CQvGF+owWCWtWp7cEdewL2fgPCFIVijFLqGLtAvLJrpcbnwoY0WgIivviK9QD2Ymm/kC5pAK/5/QljYldyOEiPbTeRLE94b4G7XWPDhIv/1SKLvnwQoy/Zs0HokG/jzW3VXSLJ9aecg8JfTp3q2Ngcm3hm/chfWvxfCRC5DnmTzlKg+HxsWp7M3ku3saR2Kv [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7E/gMzpHo4YXEmIBMau+H1Jw01UVEvUSWWy1zz6v0xXQbSQt2fKm1Iwm9XoijLWrv0PWZp44XdPNAP0o+2pK2y8G3t7qMMbAIu9LSe7No51Npz450HMbKlXtZ2ED3Zd/09SP+ekpd4z8IiMeZGUQNCsdRe0jgIbP8QtEkg7MO7WrI9d54dBMzjWPqrxYcSfP3yf65WXGwTcxbmbaFpuFSMvx2c+OsA1u1wemNk0r1/f1Q5Qu4GavOX0ZQojUjz1z2CJe9Hm+hNtpgrIrmnzPk3+4qw/8udUkDGYuZD0G9J2xPmD1qwu5oz29a9b/2eMqStQu7NWoDfCCpoa6ZrHuv [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMRgOi/sWQHXRgL24VqDgEncOxmoV9IlmjVhC9Mhbx1vpRWfLakGzxQdmyN6X7xJxZjc3O6CcJbVSZTBnAd45PpX5EhQ/QBa+ycz9Cfl44bVyEaeb2mAtcTbILlepNgvdT+MgVxDkkL+7bL9XMwkxptmQozTY7RTRd3MHiXOk+AoDMt9iJrpGyhjSpR5WEL7jG9TAakOlnBSyTPPnjPV+SerN3DEG/KSg3vAED9Pimzka7MM6syjMVypVt5AZhDmc5bTXPwq6+WTupspI6MdXkeKn6tTCZGhE1Qwvf8nP0Thope6fvrbVWTTOf4RGW8/f9zXyWy2PC9Pp2h5AIihZf [email protected]
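The key-merging step above can be scripted. A sketch, assuming the three public keys have been copied to ubuntu1 under the hypothetical names id_rsa.pub.ubuntu1/2/3 (alternatively, the stock ssh-copy-id tool performs the per-host copy for you):

```shell
# Hypothetical helper: append the given public key files to an
# authorized_keys file and lock its permissions down to 600.
merge_pubkeys() {
  dest="$1"; shift      # destination authorized_keys file
  cat "$@" >> "$dest"   # append every key file
  chmod 600 "$dest"
}
# On ubuntu1:
# merge_pubkeys ~/.ssh/authorized_keys id_rsa.pub.ubuntu1 id_rsa.pub.ubuntu2 id_rsa.pub.ubuntu3
```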

Try a passwordless login from ubuntu1 to ubuntu2. The first connection may prompt you to confirm the remote host's key fingerprint; subsequent logins require no interaction.

[email protected]:~$ ssh ubuntu2
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-21-generic x86_64)

 * Documentation:  https://help.ubuntu.com/
Last login: Tue May  2 19:56:34 2017 from 10.3.19.171
[email protected]:~$ 

3. Install Hadoop on one host, ubuntu1

Hadoop is deployed under /home/bigdata/run; we deploy on ubuntu1 first. The steps are as follows:
step1. Unpack hadoop-2.7.3.tar.gz, move it into the run directory, and create a symlink:
[email protected]:~$ tar -zxvf hadoop-2.7.3.tar.gz
[email protected]:~$ mv hadoop-2.7.3 /home/bigdata/run/
[email protected]:~$ ln -s /home/bigdata/run/hadoop-2.7.3 /home/bigdata/run/hadoop
step2: Configure core-site.xml:
[email protected]:~$ cd /home/bigdata/run/hadoop/etc/hadoop
[email protected]:~/run/hadoop/etc/hadoop$ 
[email protected]:~/run/hadoop/etc/hadoop$ cat core-site.xml 
......
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://ubuntu1:9000</value>
        </property>
</configuration>

step3: Configure hdfs-site.xml:
[email protected]:~/run/hadoop/etc/hadoop$ cat hdfs-site.xml 
.......
<configuration>

        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>

        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/bigdata/run/hadoop/tmp/name</value>
        </property>

        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/bigdata/run/hadoop/tmp/data</value>
        </property>

        <property>
                <name>dfs.http.address</name>
                <value>ubuntu1:50070</value>
        </property>

        <property>
                <name>dfs.datanode.http.address</name>
                <value>ubuntu1:50075</value>
        </property>

</configuration>

step4: Configure mapred-site.xml:
[email protected]:~/run/hadoop/etc/hadoop$ cat mapred-site.xml
......
<configuration>
        <property>  
                <name>mapreduce.framework.name</name>  
                <value>yarn</value>
        </property>  

</configuration>

step5: Configure yarn-site.xml:
[email protected]:~/run/hadoop/etc/hadoop$ cat yarn-site.xml 
......
<configuration>
        <property>  
                <name>yarn.resourcemanager.hostname</name>  
                <value>ubuntu1</value>  
        </property>  

        <property>  
                <name>yarn.nodemanager.hostname</name>  
                <value>ubuntu1</value> 
        </property>  

        <property>  
                <name>yarn.nodemanager.aux-services</name>  
                <value>mapreduce_shuffle</value>  
        </property>
</configuration>

step6: Configure hadoop-env.sh, setting JAVA_HOME to the JDK installation path:
[email protected]:~/run/hadoop/etc/hadoop$ cat hadoop-env.sh 
......
# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/home/bigdata/usr/jdk1.8.0_25
......

step7: Create the directories that hold the namenode and datanode data (use -p, since the tmp parent directory does not exist yet):
mkdir -p /home/bigdata/run/hadoop/tmp/name
mkdir -p /home/bigdata/run/hadoop/tmp/data
These two paths are the ones configured in hdfs-site.xml.
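The two mkdir calls can be wrapped in a small helper so the base path is stated only once. A sketch; make_dfs_dirs and its base-directory argument are illustrative assumptions:

```shell
# Hypothetical helper: create the name/ and data/ directories under the
# Hadoop tmp base path configured in hdfs-site.xml.
make_dfs_dirs() {
  base="$1"
  mkdir -p "$base/name" "$base/data"
}
# On ubuntu1: make_dfs_dirs /home/bigdata/run/hadoop/tmp
```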

This completes the Hadoop deployment on ubuntu1. Next, deploy Hadoop to the ubuntu2 and ubuntu3 hosts.

4. Install Hadoop on the ubuntu2 and ubuntu3 hosts

For convenience, we simply tar up the /home/bigdata/run/hadoop-2.7.3 directory on ubuntu1 and copy it to ubuntu2 and ubuntu3. Note that the directory layout on ubuntu2 and ubuntu3 must match ubuntu1's; Hadoop is again deployed under /home/bigdata/run.
bigdata@ubuntu1:~/run$ tar -zcvf hadoop-2.7.3.tar.gz hadoop-2.7.3/
scp hadoop-2.7.3.tar.gz ubuntu2:/home/bigdata/run/
scp hadoop-2.7.3.tar.gz ubuntu3:/home/bigdata/run/
Unpack the archive on ubuntu2 and ubuntu3 and create the symlink. The following shows the operations on ubuntu2; ubuntu3 is analogous.
bigdata@ubuntu2:~/run$ tar -zxvf hadoop-2.7.3.tar.gz
bigdata@ubuntu2:~$ ln -s /home/bigdata/run/hadoop-2.7.3 /home/bigdata/run/hadoop
Next, modify the configuration files. The changes on ubuntu2 and ubuntu3 are identical, so only the ubuntu2 side is described.
step1. Modify hdfs-site.xml, commenting out the namenode-only properties as shown below:
[email protected]:~/run/hadoop/etc/hadoop$ cat hdfs-site.xml 
......
<configuration>

        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
<!--
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/bigdata/run/hadoop/tmp/name</value>
        </property>
-->
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/bigdata/run/hadoop/tmp/data</value>
        </property>

<!--
        <property>
                <name>dfs.http.address</name>
                <value>ubuntu1:50070</value>
        </property>

        <property>
                <name>dfs.datanode.http.address</name>
                <value>ubuntu1:50075</value>
        </property>
-->

</configuration>


step2: Check mapred-site.xml; in this deployment it stays the same as on ubuntu1:
[email protected]:~/run/hadoop/etc/hadoop$ cat mapred-site.xml
......
<configuration>
        <property>  
                <name>mapreduce.framework.name</name>  
                <value>yarn</value>
        </property>  
</configuration>


5. Start the Hadoop cluster

step0: Format the namenode (the hadoop namenode form still works in 2.x but is deprecated; hdfs namenode -format is the preferred equivalent):
[email protected]:~/run/hadoop/bin$ cd /home/bigdata/run/hadoop/bin
[email protected]:~/run/hadoop/bin$ ./hadoop namenode -format
step1. Start the cluster:
[email protected]:~$ cd /home/bigdata/run/hadoop/sbin
[email protected]:~/run/hadoop/sbin$ ./start-all.sh
step2. Check that the processes have started:
[email protected]:~/run/hadoop/sbin$ jps -l
1827 org.apache.hadoop.yarn.server.nodemanager.NodeManager
1494 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
1225 org.apache.hadoop.hdfs.server.namenode.NameNode
1707 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
2110 sun.tools.jps.Jps
1343 org.apache.hadoop.hdfs.server.datanode.DataNode

[email protected]:~/run/hadoop/etc/hadoop$ jps -l
983 org.apache.hadoop.hdfs.server.datanode.DataNode
1166 sun.tools.jps.Jps
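A quick way to compare the jps output against the daemons expected on each node; check_daemons is a hypothetical helper for illustration:

```shell
# Hypothetical helper: given jps output (pid + class name per line), print
# any expected daemon that is not in the list. Works with jps or jps -l,
# since the package prefix is stripped before comparing.
check_daemons() {
  jps_out="$1"; shift
  names=$(echo "$jps_out" | awk '{print $2}' | sed 's/.*\.//')
  for d in "$@"; do
    echo "$names" | grep -qx "$d" || echo "missing: $d"
  done
}
# On ubuntu1:
# check_daemons "$(jps -l)" NameNode SecondaryNameNode DataNode ResourceManager NodeManager
```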
step3. View the web UI by visiting http://ubuntu1:50070



6. Test Hadoop

For convenience during testing, set the environment variables as follows:
export JAVA_HOME=/home/bigdata/usr/jdk1.8.0_25
export JRE_HOME=$JAVA_HOME/jre 
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH 

export HADOOP_PREFIX=/home/bigdata/run/hadoop
export HADOOP_HOME=$HADOOP_PREFIX  
export HADOOP_COMMON_HOME=$HADOOP_PREFIX  
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop  
export HADOOP_HDFS_HOME=$HADOOP_PREFIX  
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX  
export HADOOP_YARN_HOME=$HADOOP_PREFIX  
export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin

The test steps are as follows:

a. Create a file test.txt in the local directory /home/bigdata:

$ cat test.txt 

hello world
b. Create a directory test on HDFS: hdfs dfs -mkdir /test
c. Copy test.txt into the test directory: hdfs dfs -copyFromLocal /home/bigdata/test.txt /test/test.txt
d. View test.txt on HDFS: hdfs dfs -cat /test/test.txt
e. Run the wordcount example, writing the results to /output:
hadoop jar /home/bigdata/run/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /test/test.txt /output
f. List the result files under /output: hdfs dfs -ls /output
g. View the word counts: hdfs dfs -cat /output/part-r-00000
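For "hello world" the job should report each word once. The expected result can be sanity-checked locally with the classic shell pipeline (this only approximates the job's whitespace tokenization; it is not part of Hadoop):

```shell
# Local approximation of wordcount: split on whitespace, count occurrences.
local_wordcount() {
  tr -s '[:space:]' '\n' < "$1" | grep -v '^$' | sort | uniq -c
}
# Compare with: hdfs dfs -cat /output/part-r-00000
# local_wordcount /home/bigdata/test.txt
```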