
First Steps with Hadoop: Quickly Setting Up a Pseudo-Distributed Hadoop Environment

hadoop, pseudo-distributed, big data

0. Preface


This article walks through setting up a Hadoop pseudo-distributed environment from scratch on a freshly installed Linux system. The goal is a fast setup that lets you experience what Hadoop has to offer and provides a base environment for further study.

The system environment used here is as follows:

  • Operating system: CentOS 6.5, 64-bit

  • Host IP address: 10.0.0.131/24

  • Hostname: leaf

  • Username: root

  • Hadoop version: 2.6.5

  • JDK version: 1.7

As you can see, we work directly as root instead of creating a dedicated hadoop user the way most tutorials do; this keeps the setup as fast as possible, since the goal here is just to try Hadoop out.

To make sure the later steps work, first confirm that the machine can resolve the hostname leaf; if it cannot, add an entry manually to the /etc/hosts file:

[root@leaf ~]# echo "127.0.0.1  leaf" >> /etc/hosts
[root@leaf ~]# ping leaf
PING leaf (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.046 ms




1. Installing rsync


Install it with the following command:

[root@leaf ~]# yum install -y rsync




2. Installing SSH and Configuring Passwordless Login


(1) Installing SSH

Install it with the following command:

[root@leaf ~]# yum install -y openssh-server openssh-clients


(2) Configuring passwordless SSH login

Because Hadoop uses the SSH protocol to manage its remote daemons, passwordless login needs to be configured.


  • Disable the firewall and SELinux

To make sure the configuration succeeds, disable the firewall and SELinux before proceeding:

# Disable the firewall
[root@leaf ~]# /etc/init.d/iptables stop
[root@leaf ~]# chkconfig --level 3 iptables off

# Disable SELinux
[root@leaf ~]# setenforce 0
[root@leaf ~]# sed -i s/SELINUX=enforcing/SELINUX=disabled/g /etc/selinux/config
[root@leaf ~]# cat /etc/selinux/config | grep disabled
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
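
As a quick sanity check before moving on (a minimal sketch; both commands are standard on CentOS 6, and the exact wording of the output may vary), confirm that iptables is stopped and SELinux is permissive:

[root@leaf ~]# /etc/init.d/iptables status
iptables: Firewall is not running.
[root@leaf ~]# getenforce
Permissive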


  • Generate a key pair

[root@leaf ~]# mkdir .ssh
[root@leaf ~]# ssh-keygen -t dsa -P '' -f .ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in .ssh/id_dsa.
Your public key has been saved in .ssh/id_dsa.pub.
The key fingerprint is:
5b:af:7c:45:f3:ff:dc:50:f5:81:4b:1e:5c:c1:86:90 root@leaf
The key's randomart image is:
+--[ DSA 1024]----+
|           .o oo.|
|           E..oo |
|             =...|
|            o = +|
|        S .  + oo|
|         o .  ...|
|        .   ... .|
|         . ..  oo|
|          o.    =|
+-----------------+


  • Add the public key to the local list of trusted keys

[root@leaf ~]# cat .ssh/id_dsa.pub >> .ssh/authorized_keys


  • Verify

With the three steps above done, passwordless login is configured; verify it with the following command:

[root@leaf ~]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d1:0d:ed:eb:e7:d1:2f:02:23:70:ef:11:14:4e:fa:42.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Wed Aug 30 04:28:01 2017 from 10.0.0.1
[root@leaf ~]#

You have to type yes on the first login; after that, subsequent logins go straight through:

[root@leaf ~]# ssh localhost
Last login: Wed Aug 30 04:44:02 2017 from localhost
[root@leaf ~]#
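
Because the Hadoop start scripts (start-dfs.sh and start-yarn.sh) launch the daemons over SSH, it is worth confirming that the login is truly non-interactive. A minimal check, using only standard OpenSSH options:

[root@leaf ~]# ssh -o BatchMode=yes localhost hostname
leaf

If this prints the hostname without prompting for a password, the Hadoop scripts will be able to log in too.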




3. Installing and Configuring the JDK


(1) Downloading the JDK

This guide uses JDK 1.7, which can be downloaded from:

http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html

After downloading, upload it to the /root directory with WinSCP:

[root@leaf ~]# ls -lh jdk-7u80-linux-x64.tar.gz 
-rw-r--r--. 1 root root 147M Aug 29 12:05 jdk-7u80-linux-x64.tar.gz


(2) Installing the JDK

Extract the JDK into /usr/local and create a symbolic link:

[root@leaf ~]# cp jdk-7u80-linux-x64.tar.gz /usr/local/
[root@leaf ~]# cd /usr/local/
[root@leaf local]# tar -zxf jdk-7u80-linux-x64.tar.gz 
[root@leaf local]# ls -ld jdk1.7.0_80/
drwxr-xr-x. 8 uucp 143 4096 Apr 11  2015 jdk1.7.0_80/
[root@leaf local]# ln -s jdk1.7.0_80/ jdk
[root@leaf local]# ls -ld jdk
lrwxrwxrwx. 1 root root 12 Aug 30 04:56 jdk -> jdk1.7.0_80/


(3) Configuring the JAVA_HOME environment variable

The java binary lives in /usr/local/jdk/bin:

[root@leaf local]# cd jdk/bin/
[root@leaf bin]# ls -lh java
-rwxr-xr-x. 1 uucp 143 7.6K Apr 11  2015 java

Configure the Java environment variables (note that JAVA_HOME should point at the JDK root, not its bin directory):

[root@leaf bin]# echo 'export JAVA_HOME=/usr/local/jdk' >> /etc/profile
[root@leaf bin]# echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
[root@leaf bin]# source /etc/profile

Now the java commands can be used from any directory:

[root@leaf ~]# java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
[root@leaf ~]# javac -version
javac 1.7.0_80




4. Installing and Configuring Hadoop


(1) Downloading Hadoop

This guide uses Hadoop 2.6.5, which can be downloaded from:

http://hadoop.apache.org/releases.html

Choose the 2.6.5 binary, download it from the linked page, and upload it to the /root directory with WinSCP:

[root@leaf ~]# ls -lh hadoop-2.6.5.tar.gz 
-rw-r--r--. 1 root root 191M Aug 29 19:09 hadoop-2.6.5.tar.gz


(2) Installing Hadoop

Extract Hadoop into /usr/local and create a symbolic link:

[root@leaf ~]# cp hadoop-2.6.5.tar.gz /usr/local
[root@leaf ~]# cd /usr/local
[root@leaf local]# tar -zxf hadoop-2.6.5.tar.gz 
[root@leaf local]# ls -ld hadoop-2.6.5
drwxrwxr-x. 9 1000 1000 4096 Oct  3  2016 hadoop-2.6.5
[root@leaf local]# ln -s hadoop-2.6.5 hadoop
[root@leaf local]# ls -ld hadoop
lrwxrwxrwx. 1 root root 12 Aug 30 05:05 hadoop -> hadoop-2.6.5


(3) Configuring the Hadoop environment variables

The Hadoop commands live in the /usr/local/hadoop/bin and /usr/local/hadoop/sbin directories, as shown below:

[root@leaf local]# cd hadoop/bin/
[root@leaf bin]# ls -lh hadoop
-rwxr-xr-x. 1 1000 1000 5.4K Oct  3  2016 hadoop

Configure the Hadoop environment variables (HADOOP_HOME should point at the Hadoop root, with bin and sbin appended to PATH):

[root@leaf bin]# echo 'export HADOOP_HOME=/usr/local/hadoop' >> /etc/profile
[root@leaf bin]# echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/profile
[root@leaf bin]# source /etc/profile

Now the hadoop commands can be used from any directory:

[root@leaf ~]# hadoop
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
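
An even quicker sanity check is hadoop version, which confirms both the PATH setup and the release you unpacked (only the first line of output is shown here; the full output also lists build and checksum details):

[root@leaf ~]# hadoop version
Hadoop 2.6.5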


(4) Configuring Hadoop

The Hadoop configuration files live in the /usr/local/hadoop/etc/hadoop directory:

[root@leaf ~]# cd /usr/local/hadoop/etc/hadoop/
[root@leaf hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        ssl-client.xml.example
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-server.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            yarn-env.cmd
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.sh
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-site.xml
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml.template
hadoop-metrics.properties   kms-env.sh               slaves

  • Configure core-site.xml

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

The value of fs.default.name specifies the IP address (or hostname) and port of the NameNode (the HDFS master); here the value hdfs://localhost:9000 means the NameNode host is localhost and its port is 9000. (In Hadoop 2.x this key is a deprecated alias of fs.defaultFS; both still work.) A way to query the effective values is shown in the sketch after this list.

  • Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/nuoline/hdfs-filesystem/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/nuoline/hdfs-filesystem/data</value>
    </property>
</configuration>

dfs.replication sets how many copies of each HDFS block are kept, providing data redundancy. dfs.name.dir (a deprecated alias of dfs.namenode.name.dir) takes a comma-separated list of directories in which the NameNode keeps redundant copies of its metadata. dfs.data.dir (a deprecated alias of dfs.datanode.data.dir) takes a comma-separated list of directories in which DataNodes store their block data.

  • Configure mapred-site.xml

Hadoop 2.6.5 ships only mapred-site.xml.template (visible in the ls output above), so first copy it to mapred-site.xml, for example with cp mapred-site.xml.template mapred-site.xml, then edit it:

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>

The mapred.job.tracker field specifies the IP address (or hostname) and port of the MapReduce JobTracker; here the host is localhost, and 9001 is the JobTracker RPC port. Note that this is a Hadoop 1.x-era setting; Hadoop 2.x replaced the JobTracker with YARN, so the value is kept here mainly for consistency with the book this walkthrough follows.

  • Configure hadoop-env.sh

Set JAVA_HOME explicitly here so that the Hadoop scripts can always locate the JDK:

export JAVA_HOME=/usr/local/jdk
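
With these files edited, you can ask Hadoop which values it actually picked up. The sketch below uses hdfs getconf, which reads core-site.xml and hdfs-site.xml and resolves the deprecated names to their 2.x equivalents, so we query fs.defaultFS and dfs.replication:

[root@leaf ~]# hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000
[root@leaf ~]# hdfs getconf -confKey dfs.replication
1

If either command returns a default instead of the value you set, re-check the corresponding XML file for typos.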




5. Starting and Testing Hadoop


(1) Formatting the HDFS distributed filesystem

Run the following command (hadoop namenode -format still works in 2.x, though hdfs namenode -format is the preferred form):

[root@leaf ~]# hadoop namenode -format
...
17/08/30 08:41:29 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/08/30 08:41:29 INFO util.ExitUtil: Exiting with status 0
17/08/30 08:41:29 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at leaf/127.0.0.1
************************************************************/

Check whether the output ends like the above; if it does, the format succeeded.


(2) Starting the Hadoop services

Run the following command:

[root@leaf ~]# start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/08/30 08:53:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-namenode-leaf.out
localhost: starting datanode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-datanode-leaf.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.5/logs/hadoop-root-secondarynamenode-leaf.out
17/08/30 08:53:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-resourcemanager-leaf.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-leaf.out


(3) Testing the Hadoop services

Once startup completes, run the jps command to see the running Hadoop daemons:

[root@leaf ~]# jps
4167 SecondaryNameNode
4708 Jps
3907 NameNode
4394 NodeManager
4306 ResourceManager
3993 DataNode
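
As a final end-to-end smoke test, you can run one of the example jobs bundled with the release. The sketch below uses the pi estimator from the examples jar shipped with 2.6.5 (the jar path shown is the standard location under our install directory; the printed estimate is only a rough Monte Carlo value):

[root@leaf ~]# hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 2 10

If the job completes and prints an "Estimated value of Pi" line, the installation can run MapReduce jobs end to end.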

You can also check the services through their web pages in a browser. The NameNode page is at http://10.0.0.131:50070:

(Screenshot: NameNode web UI)

The DataNode page is at http://10.0.0.131:50075:

(Screenshot: DataNode web UI)



6. References


《Hadoop核心技術》 (Hadoop Core Technology)

Note, however, that the book covers Hadoop 1.x, while this guide uses 2.x.


This post comes from the "香飄葉子" blog; please keep this attribution: http://xpleaf.blog.51cto.com/9315560/1960982
