
Setting up a Hadoop distributed environment from scratch

1. Setting up the Linux virtual machines

1.1 Installing Linux

1. Recommended hypervisor: VirtualBox
2. Linux distribution: Ubuntu
3. Choose a dynamically allocated disk with a maximum size of 50 GB (if the maximum disk size is too small, Hadoop tends to hit unexpected problems later on)
4. Use a bridged network adapter (do not use NAT, otherwise the router will not assign each VM its own IP address)
5. Install the Guest Additions and enable the bidirectional shared clipboard
6. Use the same username on every machine, e.g. hadoop; a common username is needed when building the distributed environment
7. Set the hostnames to hadoop-master, hadoop-slave01 and hadoop-slave02 (if you forget to do this during installation, see 1.2 for changing the hostname)
PS: the installation procedure itself is not covered in detail here.
After installation, run ifconfig on each machine to check its IP address.
(The original screenshots hadoop-ip01, hadoop-ip02 and hadoop-ip03 showed the three machines' ifconfig output; the IPs used throughout this guide are 192.168.31.88, 192.168.31.234 and 192.168.31.186.)

1.2 Changing the Linux hostname

  1. sudo apt-get install vim
  2. sudo vim /etc/hostname
  3. Change it to the hostname you want, e.g. hadoop-master, and save
  4. reboot (a non-reboot alternative is sketched after this list)
  5. Confirm that the change took effect: the shell prompt changes from hadoop@<old-hostname> to hadoop@hadoop-master
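On newer, systemd-based Ubuntu releases the hostname can also be changed without editing /etc/hostname by hand or rebooting; a minimal sketch, assuming systemd is present:

# set the hostname immediately and persistently (no reboot needed)
sudo hostnamectl set-hostname hadoop-master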

2 Installing ssh and polysh

ssh and polysh are installed so that the virtual machines can be reached from your own computer, e.g. a MacBook accessing the three VirtualBox VMs hadoop-master, hadoop-slave01 and hadoop-slave02.
ssh has to be installed on all four machines (the three VMs plus the local one), while polysh only needs to be installed on the local machine; it then provides a single shell that drives all three VMs.

2.1 Installing and using ssh

  1. sudo apt-get install ssh (install on every VM first; the remaining steps can then be carried out over ssh)
  2. On the local machine, add the VM hostname-to-IP mappings to /etc/hosts:
    192.168.31.88 hadoop-master
    192.168.31.234 hadoop-slave01
    192.168.31.186 hadoop-slave02
  3. ssh hadoop@hadoop-master (enter the VM's password)
  4. ssh-keygen (generates the key files used for passwordless login; just press Enter at every prompt)
  5. cd .ssh and ls to list the files (look up the exact meaning of id_rsa and id_rsa.pub if they are unfamiliar)
  6. To log in to a machine without a password, append your own id_rsa.pub to that machine's .ssh/authorized_keys file (see the ssh-copy-id sketch after this list)
  7. On the VM, run: echo "ssh-rsa xxxxxxxx" >> .ssh/authorized_keys (where "ssh-rsa xxx" is the content of the local machine's id_rsa.pub)
  8. exit to close the VM shell
  9. ssh hadoop@hadoop-master (no password is required any more)
  10. Preferably also append each VM's id_rsa.pub to the local machine's authorized_keys, so that files can be copied in either direction without passwords
  11. You can now log in to all three machines without a password: ssh hadoop@hadoop-master, ssh hadoop@hadoop-slave01, ssh hadoop@hadoop-slave02
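If ssh-copy-id is available on the local machine (it ships with most OpenSSH installations), steps 6-7 can be done with a single command per host; a minimal sketch, assuming the /etc/hosts entries above and the hadoop user:

# push the local public key to every VM (each host prompts once for the hadoop user's password)
for h in hadoop-master hadoop-slave01 hadoop-slave02; do
    ssh-copy-id hadoop@$h
done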

2.2 Installing polysh

polysh is a tool for logging in to several machines at once. In the scenario above, copying a Hadoop installation package to the VMs would otherwise require three separate ssh connections, uploading the same file in each of them; with polysh you operate all three machines from one shell, so a single upload command puts the file on all three VMs.
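For comparison, without polysh the same upload has to be issued once per VM from the local machine; a sketch, assuming the passwordless ssh setup from 2.1:

# without polysh: one scp per virtual machine
scp hadoop-1.2.1-bin.tar.gz hadoop@hadoop-master:~/
scp hadoop-1.2.1-bin.tar.gz hadoop@hadoop-slave01:~/
scp hadoop-1.2.1-bin.tar.gz hadoop@hadoop-slave02:~/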

Installing the polysh script

wget http://guichaz.free.fr/polysh/files/polysh-0.4.tar.gz
tar -zxvf polysh-0.4.tar.gz
cd polysh-0.4
sudo python setup.py install
polysh --help  # if usage information is printed, the installation succeeded
# add the polysh command to the PATH
echo "export PATH=~/polysh/bin:$PATH" >> ~/.bash_profile
source ~/.bash_profile

Using polysh

echo "alias hadoop-login=\"polysh '[email protected]' '[email protected]op-slave<01-02>' \" ">> ~/.bash_profile
source ~/.bash_profile
#以後就可通過hadoop-login來登入三臺機器
hadoop-login

Example: adding the host-to-IP mappings for the Hadoop master and slaves on every VM

sudo sh -c "echo \"192.168.31.88 hadoop-master\n192.168.31.234 hadoop-slave01\n192.168.31.186 hadoop-slave02\n\" >> /etc/hosts"
# you will be prompted for the sudo password here (enter: hadoop)
cat /etc/hosts
# the following should appear (the part before the colon is the machine's hostname, the rest is the file content)
hadoop@hadoop-slave01 : 192.168.31.88 hadoop-master
hadoop@hadoop-slave01 : 192.168.31.234 hadoop-slave01
hadoop@hadoop-slave01 : 192.168.31.186 hadoop-slave02
hadoop@hadoop-master  : 192.168.31.88 hadoop-master
hadoop@hadoop-master  : 192.168.31.234 hadoop-slave01
hadoop@hadoop-master  : 192.168.31.186 hadoop-slave02
hadoop@hadoop-slave02 : 192.168.31.88 hadoop-master
hadoop@hadoop-slave02 : 192.168.31.234 hadoop-slave01
hadoop@hadoop-slave02 : 192.168.31.186 hadoop-slave02

3 Installing Hadoop

3.1 Downloading and uploading the Hadoop package

  • Download the Hadoop package: hadoop-1.2.1-bin.tar.gz
  • scp <local-user>@<local-machine-ip>:/Users/username/Downloads/hadoop-1.2.1-bin.tar.gz ./ (run inside the polysh shell; this pulls the package from the local machine onto every VM. If the VMs' id_rsa.pub keys have not been added to the local machine's authorized_keys, you will be prompted for the local login password on every host, which is why setting that up is recommended)
  • ls hadoop*
    hadoop@hadoop-master : hadoop-1.2.1-bin.tar.gz
    hadoop@hadoop-slave01 : hadoop-1.2.1-bin.tar.gz
    hadoop@hadoop-slave02 : hadoop-1.2.1-bin.tar.gz
  • Download the JDK package: jdk-8u111-linux-x64.tar.gz
  • scp <local-user>@<local-machine-ip>:/Users/username/Downloads/jdk-8u111-linux-x64.tar.gz ./ (a quick integrity check across the VMs is sketched after this list)
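To confirm that every VM received identical files, a quick checksum comparison can be run from the polysh shell (a sketch; any consistent hash works):

# inside polysh: each host prints its checksums, which should all match
md5sum hadoop-1.2.1-bin.tar.gz jdk-8u111-linux-x64.tar.gz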

3.2 Installing and configuring Hadoop

1. Install the JDK and configure the environment variables

# unpack
tar -xzvf jdk-8u111-linux-x64.tar.gz
# set the JDK environment variables (PATH and CLASSPATH)
# single quotes keep $JAVA_HOME from expanding at echo time, before it is defined
echo 'export JAVA_HOME=/home/hadoop/workspace/jdk8' >> .bash_profile
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> .bash_profile
echo 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> .bash_profile
source .bash_profile
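The JDK tarball unpacks into a versioned directory rather than the jdk8 path used for JAVA_HOME above; a minimal sketch of the move this guide assumes (the directory name jdk1.8.0_111 is inferred from the package version):

# move the unpacked JDK to the location JAVA_HOME points at
mkdir -p /home/hadoop/workspace
mv jdk1.8.0_111 /home/hadoop/workspace/jdk8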

2. Install Hadoop and edit its configuration

# unpack
tar -xzvf hadoop-1.2.1-bin.tar.gz
# set the Hadoop environment variables (single quotes defer expansion until the profile is sourced)
echo 'export HADOOP_HOME=/home/hadoop/workspace/hadoop' >> .bash_profile
echo 'export PATH=$HADOOP_HOME/bin:$PATH' >> .bash_profile
source .bash_profile
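Likewise, the Hadoop tarball unpacks into hadoop-1.2.1 while HADOOP_HOME points at workspace/hadoop; a sketch of the assumed move:

# move the unpacked distribution to the location HADOOP_HOME points at
mv hadoop-1.2.1 /home/hadoop/workspace/hadoop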

# Edit the configuration files under conf/

# hadoop-env.sh
# set the JAVA_HOME variable

# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
export JAVA_HOME=/home/hadoop/workspace/jdk8
# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=
# core-site.xml
# set the namenode address and port
# set the temporary directory for Hadoop's intermediate data and logs
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop-master:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/workspace/hadoop_dir/tmp</value>
    </property>
</configuration>
# hdfs-site.xml
# namenode metadata directory
# datanode data directory
# replication factor
<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/workspace/hadoop_dir/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/workspace/hadoop_dir/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
</configuration>
# mapred-site.xml
# jobtracker address and port for MapReduce
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoop-master:8021</value>
    </property>
</configuration>
# masters
# list of master hostnames
hadoop-master
# slaves
# list of slave hostnames
hadoop-slave01
hadoop-slave02

Once all three VMs have been configured in the same way, Hadoop is installed and configured.
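One way to create the directories referenced by the configuration and to replicate the edited conf/ directory to the slaves (a sketch; it assumes hadoop-master can reach the slaves over ssh, which the next section sets up, otherwise enter passwords when prompted):

# from the polysh shell: create the directories used by core-site.xml and hdfs-site.xml
mkdir -p /home/hadoop/workspace/hadoop_dir/tmp /home/hadoop/workspace/hadoop_dir/name /home/hadoop/workspace/hadoop_dir/data
# from hadoop-master: copy the edited configuration files to both slaves
scp /home/hadoop/workspace/hadoop/conf/* hadoop@hadoop-slave01:/home/hadoop/workspace/hadoop/conf/
scp /home/hadoop/workspace/hadoop/conf/* hadoop@hadoop-slave02:/home/hadoop/workspace/hadoop/conf/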

Starting the Hadoop services

1. Start the Hadoop services

  • ssh hadoop@hadoop-master
  • hadoop namenode -format (this command only needs to be run once)
    A successful namenode format prints output like the following:
************************************************************/
17/01/07 21:31:32 INFO util.GSet: Computing capacity for map BlocksMap
17/01/07 21:31:32 INFO util.GSet: VM type       = 64-bit
17/01/07 21:31:32 INFO util.GSet: 2.0% max memory = 1013645312
17/01/07 21:31:32 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/01/07 21:31:32 INFO util.GSet: recommended=2097152, actual=2097152
17/01/07 21:31:32 INFO namenode.FSNamesystem: fsOwner=hadoop
17/01/07 21:31:32 INFO namenode.FSNamesystem: supergroup=supergroup
17/01/07 21:31:32 INFO namenode.FSNamesystem: isPermissionEnabled=true
17/01/07 21:31:32 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
17/01/07 21:31:32 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
17/01/07 21:31:32 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
17/01/07 21:31:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/01/07 21:31:33 INFO common.Storage: Image file /home/hadoop/workspace/hadoop_dir/name/current/fsimage of size 112 bytes saved in 0 seconds.
17/01/07 21:31:33 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/hadoop/workspace/hadoop_dir/name/current/edits
17/01/07 21:31:33 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/hadoop/workspace/hadoop_dir/name/current/edits
17/01/07 21:31:33 INFO common.Storage: Storage directory /home/hadoop/workspace/hadoop_dir/name has been successfully formatted.
17/01/07 21:31:33 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-master/192.168.31.88
************************************************************/
  • Set up passwordless ssh from hadoop-master to hadoop-slave01 and hadoop-slave02 (start-all.sh starts the slave daemons over ssh; the same key-copy procedure as in 2.1 works here)
  • cd workspace/hadoop/bin/
  • sh start-all.sh
    A successful start logs the following:
starting namenode, logging to /home/hadoop/workspace/hadoop/libexec/../logs/hadoop-hadoop-namenode-hadoop-master.out
hadoop-slave01: starting datanode, logging to /home/hadoop/workspace/hadoop/libexec/../logs/hadoop-hadoop-datanode-hadoop-slave01.out
hadoop-slave02: starting datanode, logging to /home/hadoop/workspace/hadoop/libexec/../logs/hadoop-hadoop-datanode-hadoop-slave02.out
hadoop-master: starting secondarynamenode, logging to /home/hadoop/workspace/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-hadoop-master.out
starting jobtracker, logging to /home/hadoop/workspace/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-hadoop-master.out
hadoop-slave02: starting tasktracker, logging to /home/hadoop/workspace/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-hadoop-slave02.out
hadoop-slave01: starting tasktracker, logging to /home/hadoop/workspace/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-hadoop-slave01.out

As the log shows, the master starts three Java processes: namenode, secondarynamenode and jobtracker,
while each slave starts two: datanode and tasktracker.
- jps: check that the running processes match the log (a few more health checks are sketched after the output)
hadoop@hadoop-master  : 6549 SecondaryNameNode
hadoop@hadoop-master  : 6629 JobTracker
hadoop@hadoop-master  : 6358 NameNode
hadoop@hadoop-master  : 6874 Jps
hadoop@hadoop-slave01 : 4096 DataNode
hadoop@hadoop-slave01 : 4197 TaskTracker
hadoop@hadoop-slave01 : 4253 Jps
hadoop@hadoop-slave02 : 5177 TaskTracker
hadoop@hadoop-slave02 : 5076 DataNode
hadoop@hadoop-slave02 : 5230 Jps
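Beyond jps, a couple of quick checks confirm that the cluster is actually serving requests; a sketch (50070 and 50030 are the default Hadoop 1.x web UI ports for the NameNode and JobTracker):

# HDFS summary: number of live datanodes, capacity, remaining space
hadoop dfsadmin -report
# web UIs, reachable from the local machine thanks to the /etc/hosts mapping:
#   NameNode:   http://hadoop-master:50070
#   JobTracker: http://hadoop-master:50030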

  • Stop the Hadoop services: sh stop-all.sh
stopping jobtracker
hadoop-slave02: stopping tasktracker
hadoop-slave01: stopping tasktracker
stopping namenode
hadoop-slave01: stopping datanode
hadoop-slave02: stopping datanode
hadoop-master: stopping secondarynamenode

4 Running a Hadoop job

Build a WordCount Hadoop jar; there are plenty of examples online, so the implementation itself is not covered here.
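The jar used below comes from a Maven build (the scp path later includes a target/ directory); a sketch of producing it, assuming a standard Maven project named hadoop-wordcount containing the com.cweeyii.hadoop.WordCount class used in the commands below:

# on the local machine, in the root of the Maven project
mvn clean package
# the runnable job jar is produced under target/
ls target/hadoop-wordcount-1.0-SNAPSHOT.jar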

1. Upload local files to HDFS

  • hadoop fs -mkdir input (create an input directory in HDFS)
  • hadoop fs -put input/* input (upload the files in the local input directory to the HDFS directory)
  • hadoop fs -ls input
hadoop@hadoop-master:~/workspace$ hadoop fs -ls input
Found 3 items
-rw-r--r--   3 hadoop supergroup       2416 2017-01-07 22:58 /user/hadoop/input/wiki01
-rw-r--r--   3 hadoop supergroup       3475 2017-01-07 22:58 /user/hadoop/input/wiki02
-rw-r--r--   3 hadoop supergroup       4778 2017-01-07 22:58 /user/hadoop/input/wiki03
  • scp <local-user>@<local-machine-ip>:/Users/wenyi/workspace/smart-industry-parent/hadoop-wordcount/target/hadoop-wordcount-1.0-SNAPSHOT.jar ./ (copy the locally built Hadoop job jar onto the VM)
  • hadoop jar hadoop-wordcount-1.0-SNAPSHOT.jar com.cweeyii.hadoop/WordCount input output (run the Hadoop WordCount job)
    Hadoop scheduling and execution log:
hadoop@hadoop-master:~/workspace$ hadoop jar hadoop-wordcount-1.0-SNAPSHOT.jar com.cweeyii.hadoop/WordCount input output
Starting execution
input   output
17/01/07 23:24:33 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
17/01/07 23:24:33 INFO util.NativeCodeLoader: Loaded the native-hadoop library
17/01/07 23:24:33 WARN snappy.LoadSnappy: Snappy native library not loaded
17/01/07 23:24:33 INFO mapred.FileInputFormat: Total input paths to process : 3
17/01/07 23:24:34 INFO mapred.JobClient: Running job: job_201701072322_0002
17/01/07 23:24:35 INFO mapred.JobClient:  map 0% reduce 0%
17/01/07 23:24:41 INFO mapred.JobClient:  map 33% reduce 0%
17/01/07 23:24:42 INFO mapred.JobClient:  map 100% reduce 0%
17/01/07 23:24:49 INFO mapred.JobClient:  map 100% reduce 33%
17/01/07 23:24:51 INFO mapred.JobClient:  map 100% reduce 100%
17/01/07 23:24:51 INFO mapred.JobClient: Job complete: job_201701072322_0002
17/01/07 23:24:51 INFO mapred.JobClient: Counters: 30
17/01/07 23:24:51 INFO mapred.JobClient:   Map-Reduce Framework
17/01/07 23:24:51 INFO mapred.JobClient:     Spilled Records=3400
17/01/07 23:24:51 INFO mapred.JobClient:     Map output materialized bytes=20878
17/01/07 23:24:51 INFO mapred.JobClient:     Reduce input records=1700
17/01/07 23:24:51 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=7414968320
17/01/07 23:24:51 INFO mapred.JobClient:     Map input records=31
17/01/07 23:24:51 INFO mapred.JobClient:     SPLIT_RAW_BYTES=309
17/01/07 23:24:51 INFO mapred.JobClient:     Map output bytes=17460
17/01/07 23:24:51 INFO mapred.JobClient:     Reduce shuffle bytes=20878
17/01/07 23:24:51 INFO mapred.JobClient:     Physical memory (bytes) snapshot=584658944
17/01/07 23:24:51 INFO mapred.JobClient:     Map input bytes=10669
17/01/07 23:24:51 INFO mapred.JobClient:     Reduce input groups=782
17/01/07 23:24:51 INFO mapred.JobClient:     Combine output records=0
17/01/07 23:24:51 INFO mapred.JobClient:     Reduce output records=782
17/01/07 23:24:51 INFO mapred.JobClient:     Map output records=1700
17/01/07 23:24:51 INFO mapred.JobClient:     Combine input records=0
17/01/07 23:24:51 INFO mapred.JobClient:     CPU time spent (ms)=1950
17/01/07 23:24:51 INFO mapred.JobClient:     Total committed heap usage (bytes)=498544640
17/01/07 23:24:51 INFO mapred.JobClient:   File Input Format Counters
17/01/07 23:24:51 INFO mapred.JobClient:     Bytes Read=10669
17/01/07 23:24:51 INFO mapred.JobClient:   FileSystemCounters
17/01/07 23:24:51 INFO mapred.JobClient:     HDFS_BYTES_READ=10978
17/01/07 23:24:51 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=285064
17/01/07 23:24:51 INFO mapred.JobClient:     FILE_BYTES_READ=20866
17/01/07 23:24:51 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=7918
17/01/07 23:24:51 INFO mapred.JobClient:   File Output Format Counters
17/01/07 23:24:51 INFO mapred.JobClient:     Bytes Written=7918
17/01/07 23:24:51 INFO mapred.JobClient:   Job Counters
17/01/07 23:24:51 INFO mapred.JobClient:     Launched map tasks=3
17/01/07 23:24:51 INFO mapred.JobClient:     Launched reduce tasks=1
17/01/07 23:24:51 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=9720
17/01/07 23:24:51 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
17/01/07 23:24:51 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=13500
17/01/07 23:24:51 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
17/01/07 23:24:51 INFO mapred.JobClient:     Data-local map tasks=3
  • hadoop fs -ls output/ (inspect the results)
# _SUCCESS marks a successful run
# _logs contains the job logs
# part-00000 contains the actual output
-rw-r--r--   3 hadoop supergroup          0 2017-01-07 23:24 /user/hadoop/output/_SUCCESS
drwxr-xr-x   - hadoop supergroup          0 2017-01-07 23:24 /user/hadoop/output/_logs
-rw-r--r--   3 hadoop supergroup       7918 2017-01-07 23:24 /user/hadoop/output/part-00000
  • hadoop fs -cat output/part-00000 (print the final output; a sketch for copying the results to the local filesystem follows the listing)
hadoop@hadoop-master:~/workspace$ hadoop fs -cat output/part-00000
"BeginnerQuestions."    1
"CamelCase" 1
"PopularMusic"  1
"Richard    1
"RichardWagner" 1
"TableOfContents"   1
"WiKi"  1
"Wiki   1
"Wiki"  1
"Wiki").    1
"WikiNode"  1
"by 1
"corrected  1
"edit"  1
"editing    1
"fixed  1
"free   1
"history"   1
"link   1
"native"    1
"plain-vanilla" 1
"popular    1
"pretty"    1
"quick".[5][6][7]   1
"tag"   2
"the    2
"wiki   2