
Compiling and Installing hadoop-2.2.0 on a Single-Node CentOS 6.5

I have been tinkering with Hadoop over the past few days, starting with the installation, which threw up quite a few problems. This post summarizes the whole process. There is already plenty of material on this online, but I think it is still worth recording the important installation steps, both to make future troubleshooting easier and as a reference for anyone who needs it.

My system is CentOS 6.5 64-bit, and I compiled and installed hadoop-2.2.0 configured as a single node. Apache Hadoop offers hadoop-2.2.0 for download in two forms: 1) a binary release, hadoop-2.2.0.tar.gz, and 2) a source release, hadoop-2.2.0-src.tar.gz. The binary release only needs to be unpacked and configured; the source release must be compiled first and then configured.

My first attempt used the binary release, but because my system is 64-bit, the bundled native-hadoop library was unusable (it is built for 32-bit systems, not 64-bit ones), and I kept getting the message "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable" (see problem summary 3.2). I therefore recompiled and installed from source.

1. Compiling the hadoop-2.2.0 source code

1.1 Preparing the build environment

While compiling the source I referred to an existing blog post. The prerequisites are:

  • Installed via yum: java, gcc, gcc-c++, make, lzo-devel, zlib-devel, autoconf, automake, libtool, ncurses-devel, openssl-devel
  • Installed manually: Maven, Protocol Buffers

Most of the yum packages and dependencies are probably preinstalled on CentOS 6.5. You can first check whether a package is installed or has updates with yum info package, install it with yum -y install package, and update it with yum -y update package, as shown below.
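For convenience, the whole list can be installed in one pass. A sketch, run as root; note that "java" on the list corresponds on CentOS 6 to the OpenJDK package matching the JAVA_HOME used later:

yum -y install java-1.7.0-openjdk-devel gcc gcc-c++ make \
    lzo-devel zlib-devel autoconf automake libtool \
    ncurses-devel openssl-devel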

For the manual installs, download the packages first and then install them. The exact versions I used are protobuf-2.5.0.tar.gz (the official site is blocked here; a mirror works as well) and apache-maven-3.0.5-bin.zip (mirror.bit.edu.cn/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.zip). protobuf must be built from source; maven is a binary release, so it only needs its environment variables configured. Note: do not use Maven 3.1.1. It has compatibility problems with projects built for Maven 3.0.x and cannot download plugins successfully, producing a maven "ServiceUnavailable" error. It is also advisable to use the oschina maven mirror, since some foreign sites may be blocked. The installation of both packages is covered in the blog post mentioned above.
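For reference, a minimal sketch of the two manual installs, assuming the archives were downloaded to /usr/local/src and the install locations used later in this post:

# Protocol Buffers: build from source into /usr/local/protobuf.
cd /usr/local/src
tar -xzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local/protobuf
make && make install

# Maven ships as a binary release: unpacking is enough; the
# environment variables are configured in /root/.bashrc below.
cd /usr/local/src
unzip apache-maven-3.0.5-bin.zip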

After installing the packages and dependencies listed above, configure the environment variables so the system can find the corresponding commands. This is what I added to /root/.bashrc:

export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64"
export CLASSPATH=.:${JAVA_HOME}/lib/:${JAVA_HOME}/jre/lib/
export PATH=${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin:$PATH
export MAVEN_HOME="/usr/local/src/apache-maven-3.0.5"
export PATH=$PATH:$MAVEN_HOME/bin
export PROTOBUF_HOME="/usr/local/protobuf"
export PATH=$PATH:$PROTOBUF_HOME/bin

After editing /root/.bashrc, load the configuration with source /root/.bashrc. Then verify that java and maven are installed correctly; output like the following means success:

[root@lls ~]# java -version
java version "1.7.0_71"
OpenJDK Runtime Environment (rhel-2.5.3.1.el6-x86_64 u71-b14)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@lls ~]# mvn -version
Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 21:51:28+0800)
Maven home: /usr/local/src/apache-maven-3.0.5
Java version: 1.7.0_71, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-431.29.2.el6.x86_64", arch: "amd64", family: "unix"

1.2 Compiling hadoop

Download the hadoop-2.2.0 source (http://apache.fastbull.org/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz; the official site no longer hosts this release, so use a mirror). The source unpacked from the hadoop-2.2.0 source tarball has a bug that must be patched before it will compile, as shown below:
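The bug is a missing test dependency in the hadoop-auth module (tracked upstream as HADOOP-10110). The commonly cited fix is to add the following to hadoop-common-project/hadoop-auth/pom.xml, next to the existing org.mortbay.jetty dependency:

<dependency>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>jetty-util</artifactId>
    <scope>test</scope>
</dependency>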

With everything in place, start the build:

cd /home/xxx/softwares/hadoop/hadoop-2.2.0-src
mvn package -Pdist,native -DskipTests -Dtar

The build takes a while. A successful run (using maven's default mirror) ends like this:

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main................................ SUCCESS [2.109s]
[INFO] Apache Hadoop Project POM......................... SUCCESS [1.828s]
[INFO] Apache Hadoop Annotations......................... SUCCESS [5.266s]
[INFO] Apache Hadoop Assemblies.......................... SUCCESS [0.228s]
[INFO] Apache Hadoop Project Dist POM.................... SUCCESS [2.184s]
[INFO] Apache Hadoop Maven Plugins....................... SUCCESS [3.562s]
[INFO] Apache Hadoop Auth................................ SUCCESS [3.128s]
[INFO] Apache Hadoop Auth Examples....................... SUCCESS [2.444s]
[INFO] Apache Hadoop Common.............................. SUCCESS [1:17.748s]
[INFO] Apache Hadoop NFS................................. SUCCESS [16.455s]
[INFO] Apache Hadoop Common Project...................... SUCCESS [0.056s]
[INFO] Apache Hadoop HDFS................................ SUCCESS [2:18.736s]
[INFO] Apache Hadoop HttpFS.............................. SUCCESS [18.687s]
[INFO] Apache Hadoop HDFS BookKeeper Journal............. SUCCESS [23.553s]
[INFO] Apache Hadoop HDFS-NFS............................ SUCCESS [3.453s]
[INFO] Apache Hadoop HDFS Project........................ SUCCESS [0.046s]
[INFO] hadoop-yarn....................................... SUCCESS [48.652s]
[INFO] hadoop-yarn-api................................... SUCCESS [44.591s]
[INFO] hadoop-yarn-common................................ SUCCESS [30.677s]
[INFO] hadoop-yarn-server................................ SUCCESS [0.096s]
[INFO] hadoop-yarn-server-common......................... SUCCESS [9.340s]
[INFO] hadoop-yarn-server-nodemanager.................... SUCCESS [16.656s]
[INFO] hadoop-yarn-server-web-proxy...................... SUCCESS [3.115s]
[INFO] hadoop-yarn-server-resourcemanager................ SUCCESS [13.133s]
[INFO] hadoop-yarn-server-tests.......................... SUCCESS [0.614s]
[INFO] hadoop-yarn-client................................ SUCCESS [4.646s]
[INFO] hadoop-yarn-applications.......................... SUCCESS [0.100s]
[INFO] hadoop-yarn-applications-distributedshell......... SUCCESS [2.815s]
[INFO] hadoop-mapreduce-client........................... SUCCESS [0.096s]
[INFO] hadoop-mapreduce-client-core...................... SUCCESS [23.624s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [2.056s]
[INFO] hadoop-yarn-site.................................. SUCCESS [0.099s]
[INFO] hadoop-yarn-project............................... SUCCESS [11.009s]
[INFO] hadoop-mapreduce-client-common.................... SUCCESS [20.053s]
[INFO] hadoop-mapreduce-client-shuffle................... SUCCESS [3.310s]
[INFO] hadoop-mapreduce-client-app....................... SUCCESS [9.819s]
[INFO] hadoop-mapreduce-client-hs........................ SUCCESS [4.843s]
[INFO] hadoop-mapreduce-client-jobclient................. SUCCESS [6.115s]
[INFO] hadoop-mapreduce-client-hs-plugins................ SUCCESS [1.682s]
[INFO] Apache Hadoop MapReduce Examples.................. SUCCESS [6.336s]
[INFO] hadoop-mapreduce.................................. SUCCESS [3.946s]
[INFO] Apache Hadoop MapReduce Streaming................. SUCCESS [4.788s]
[INFO] Apache Hadoop Distributed Copy.................... SUCCESS [8.510s]
[INFO] Apache Hadoop Archives............................ SUCCESS [2.061s]
[INFO] Apache Hadoop Rumen............................... SUCCESS [7.269s]
[INFO] Apache Hadoop Gridmix............................. SUCCESS [4.815s]
[INFO] Apache Hadoop Data Join........................... SUCCESS [3.659s]
[INFO] Apache Hadoop Extras.............................. SUCCESS [3.132s]
[INFO] Apache Hadoop Pipes............................... SUCCESS [9.350s]
[INFO] Apache Hadoop Tools Dist.......................... SUCCESS [1.850s]
[INFO] Apache Hadoop Tools............................... SUCCESS [0.023s]
[INFO] Apache Hadoop Distribution........................ SUCCESS [19.184s]
[INFO] Apache Hadoop Client.............................. SUCCESS [6.730s]
[INFO] Apache Hadoop Mini-Cluster........................ SUCCESS [0.192s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:40.193s
[INFO] Finished at: Fri Nov 21 14:43:06 CST 2014
[INFO] Final Memory: 131M/471M
[INFO] ------------------------------------------------------------------------

The compiled distribution ends up in hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0.

2. Installing hadoop on a single node

The following installs hadoop-2.2.0 in single-node mode on CentOS 6.5 64-bit.

2.1 Creating a group and a user

[root@lls Desktop]# groupadd hadoopgroup
[root@lls Desktop]# useradd hadoopuser
[root@lls Desktop]# passwd hadoopuser
Changing password for user hadoopuser.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lls Desktop]# usermod -g hadoopgroup hadoopuser


2.2 Installing and configuring SSH

Hadoop manages its nodes over SSH, which must be configured even in single-node mode; otherwise you will get a "connection refused on port 22" error. Make sure SSH is installed first; if it is not, install it with yum install openssh-server.
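On CentOS 6 you should also make sure the SSH daemon is running and enabled at boot (stock SysV service names):

service sshd start
chkconfig sshd on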

Generate an SSH key for the hadoop user, so that subsequent logins to the hadoop node need no password:

Note: this step is performed after switching to hadoopuser.

[hadoopuser@lls ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoopuser/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoopuser/.ssh/id_rsa.
Your public key has been saved in /home/hadoopuser/.ssh/id_rsa.pub.
The key fingerprint is:
0b:6e:2f:89:a5:42:42:40:b2:69:fc:3f:4c:84:33:eb hadoopuser@lls.pc
The key's randomart image is:
+--[ RSA 2048]----+
|o.               |
|+o  .            |
|+o + .           |
|... =            |
|.  o .. S        |
|. o +... .       |
| o E Bo..        |
|  . o.+.         |
|   .   ..        |
+-----------------+
Authorize the new key for logging in to the local host:
[hadoopuser@lls ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
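If key-based login still asks for a password, it is usually file permissions: sshd ignores keys that are group- or world-accessible. A quick fix and a login test:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# Should log in without a password (the first run asks to
# accept the host key), then exit immediately.
ssh localhost exit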


2.3 Setting ownership of the installation files

I install hadoop under /usr/local: copy the compiled hadoop-2.2.0 directory to /usr/local and change its owner:

cp -R /home/xxx/softwares/hadoop/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0 /usr/local/
cd /usr/local/
mv hadoop-2.2.0/ hadoop
chown -R hadoopuser:hadoopgroup hadoop/

2.4 Creating the HDFS directories

cd /usr/local/hadoop/
mkdir -p data/namenode
mkdir -p data/datanode
mkdir -p data/secondarynamenode

2.5 Configuring hadoop-env.sh

In /usr/local/hadoop/etc/hadoop/hadoop-env.sh, add:

export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64"
export HADOOP_HOME="/usr/local/hadoop"
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

Note: comment out or delete the following two lines in the original file, otherwise the native-hadoop library will not load:

export JAVA_HOME=${JAVA_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
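To confirm that the rebuilt native library really is 64-bit (and the WARN from the introduction is gone), you can inspect it directly:

file /usr/local/hadoop/lib/native/libhadoop.so*
# Expected: ELF 64-bit LSB shared object, x86-64 ...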

2.6 Configuring core-site.xml

In /usr/local/hadoop/etc/hadoop/core-site.xml, add inside the <configuration> tags:

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>
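(Note: fs.default.name is the legacy name for this property; Hadoop 2.x deprecates it in favor of fs.defaultFS, though both are accepted in 2.2.0.)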

2.7 Configuring hdfs-site.xml

In /usr/local/hadoop/etc/hadoop/hdfs-site.xml, add inside the <configuration> tags:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>file:///usr/local/hadoop/data/namenode</value>
</property>
<property>
    <name>fs.checkpoint.dir</name>
    <value>file:///usr/local/hadoop/data/secondarynamenode</value>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>file:///usr/local/hadoop/data/datanode</value>
</property>



2.8 Configuring yarn-site.xml

In /usr/local/hadoop/etc/hadoop/yarn-site.xml, add inside the <configuration> tags:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>


2.9 Configuring mapred-site.xml

Create mapred-site.xml from the template:

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

Then, in /usr/local/hadoop/etc/hadoop/mapred-site.xml, add inside the <configuration> tags:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.job.tracker</name>
    <value>localhost:8021</value>
</property>
<property>
    <name>mapreduce.local.dir</name>
    <value>file:///usr/local/hadoop/data/mapreduce</value>
</property>



2.10 Adding hadoop to the executable path

Add the hadoop executable directories to /home/hadoopuser/.bashrc:

echo "exportPATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin" >> /home/hadoopuser/.bashrc
source ~/.bashrc
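To confirm the new PATH is picked up, a quick check from the hadoopuser shell:

which hadoop     # should print /usr/local/hadoop/bin/hadoop
hadoop version   # should report 2.2.0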


2.11 Formatting HDFS


[hadoopuser@lls hadoop]$ hdfs namenode -format
14/11/22 13:00:18 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = lls.pc/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.6.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.2.0.jar:/usr/local/h
adoop/share/hadoop/common/hadoop-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/usr/local/hadoop/share
/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2014-11-21T06:32Z
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
14/11/22 13:00:18 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-bf1d252c-2710-45e6-af26-344debf86840
14/11/22 13:00:19 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/11/22 13:00:19 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/11/22 13:00:19 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map BlocksMap
14/11/22 13:00:19 INFO util.GSet: VM type       = 64-bit
14/11/22 13:00:19 INFO util.GSet: 2.0% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/11/22 13:00:19 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/11/22 13:00:19 INFO blockmanagement.BlockManager: defaultReplication         = 1
14/11/22 13:00:19 INFO blockmanagement.BlockManager: maxReplication             = 512
14/11/22 13:00:19 INFO blockmanagement.BlockManager: minReplication             = 1
14/11/22 13:00:19 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/11/22 13:00:19 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/11/22 13:00:19 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/11/22 13:00:19 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/11/22 13:00:19 INFO namenode.FSNamesystem: fsOwner             = hadoopuser (auth:SIMPLE)
14/11/22 13:00:19 INFO namenode.FSNamesystem: supergroup          = supergroup
14/11/22 13:00:19 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/11/22 13:00:19 INFO namenode.FSNamesystem: HA Enabled: false
14/11/22 13:00:19 INFO namenode.FSNamesystem: Append Enabled: true
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map INodeMap
14/11/22 13:00:19 INFO util.GSet: VM type       = 64-bit
14/11/22 13:00:19 INFO util.GSet: 1.0% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/11/22 13:00:19 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/11/22 13:00:19 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/11/22 13:00:19 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/11/22 13:00:19 INFO util.GSet: VM type       = 64-bit
14/11/22 13:00:19 INFO util.GSet: 0.029999999329447746% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/local/hadoop/data/namenode ? (Y or N) y
14/11/22 13:00:39 INFO common.Storage: Storage directory /usr/local/hadoop/data/namenode has been successfully formatted.
14/11/22 13:00:39 INFO namenode.FSImage: Saving image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
14/11/22 13:00:39 INFO namenode.FSImage: Image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 of size 202 bytes saved in 0 seconds.
14/11/22 13:00:39 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/11/22 13:00:39 INFO util.ExitUtil: Exiting with status 0
14/11/22 13:00:39 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lls.pc/127.0.0.1
************************************************************/

2.12 Starting hadoop

[root@lls ~]# su hadoopuser
[hadoopuser@lls ~]$ start-dfs.sh && start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-namenode-lls.pc.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-datanode-lls.pc.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-secondarynamenode-lls.pc.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoopuser-resourcemanager-lls.pc.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoopuser-nodemanager-lls.pc.out
[hadoopuser@lls ~]$

2.13 Checking hadoop status

Check the hadoop daemon processes:

[hadoopuser@lls data]$ jps
13466 Jps
18277 ResourceManager
17952 DataNode
18126 SecondaryNameNode
18394 NodeManager
17817 NameNode 

The number on the left is the PID of each java process (assigned dynamically when hadoop starts); DataNode, NameNode, NodeManager, SecondaryNameNode, and ResourceManager are the hadoop daemons.

HDFS has several built-in web services that let you inspect its state from a browser. More detailed hadoop status is available at:

  • Cluster status  http://localhost:8088/cluster

  • HDFS status http://localhost:50070/dfshealth.jsp
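As a final smoke test, you can run one of the bundled MapReduce examples (the examples jar is part of the build, as seen in the classpath printed during formatting). A sketch:

# Estimate pi with 2 maps x 5 samples each; a successful run means
# HDFS, YARN, and MapReduce are all wired up correctly.
hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5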