Hadoop 2.2.0 Pseudo-Distributed Environment Setup (Appendix: Compiling Hadoop-2.2.0 on a 64-bit OS)

Hadoop 2.2.0 pseudo-distributed environment setup:

Preface: Hadoop 2.2.0 ships with native libraries built for 32-bit OSes by default. To run it on a 64-bit OS, you can compile Hadoop 2.2.0 on the 64-bit OS yourself; the build steps are given at the end of this article.

Step 1

Action: download the software.

File: hadoop-2.2.0.tar.gz

Steps:

Download the official hadoop-2.2.0.tar.gz release, or use a pre-built package that already supports 64-bit OSes:

    hadoop-2.2.0.x86_64.tar.gz (Baidu Cloud share)

Step 2

Action: set environment variables.

File: /etc/profile

Steps:

     sudo vim /etc/profile

Append the following to the end of the file:

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
export JRE_HOME=$JAVA_HOME/jre
export HADOOP_HOME2=/home/rocketeer/Hadoop/hadoop-2.2.0
# HADOOP_PREFIX is used by the two lines below, so point it at the same install directory
export HADOOP_PREFIX=$HADOOP_HOME2
export PATH=.:$JAVA_HOME/bin:$HADOOP_HOME2/bin:$HADOOP_HOME2/sbin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
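After saving, reload the profile in the current shell and sanity-check the variables. This quick check is an addition to the original steps (it assumes the paths shown above):

     source /etc/profile
     echo $HADOOP_HOME2     # should print /home/rocketeer/Hadoop/hadoop-2.2.0
     hadoop version         # should report Hadoop 2.2.0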

Step 3

Action: set the hostname (this step is required; otherwise formatting the namenode fails with an error that the hostname cannot be resolved).

Files: /etc/hostname
       /etc/hosts

Steps:

    sudo vim /etc/hostname

In the editor, change the file's content to:

     rocketeer

    sudo vim /etc/hosts

Change the first two lines to the following:

     127.0.0.1  localhost.localdomain localhost
     127.0.0.1  new-hostname.localdomain new-hostname

Or alternatively:

     127.0.0.1  localhost
     192.168.159.148 rocketeer

Verification:

Reboot the VM, then run the following commands to confirm the change took effect:

     hostname
     hostname -f

Both commands should return the new hostname, rocketeer.

Step 4

Action: set up passwordless SSH login.

Steps:

    sudo apt-get install ssh

After installation, a hidden folder .ssh is created under ~ (the current user's home directory, here /home/hduser).

(ls -a shows hidden files.) If the folder does not exist, create it yourself:

    mkdir .ssh
    cd .ssh

Run ssh-keygen -t rsa and press Enter through every prompt (this generates the key pair).

Append id_rsa.pub to the authorized keys:

    cat id_rsa.pub >> authorized_keys

Restart the SSH service so the change takes effect:

    service ssh restart

Finally, run

    ssh localhost

to confirm you can connect to localhost without a password.
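If ssh localhost still prompts for a password, overly permissive permissions on ~/.ssh are the usual cause. A minimal fix (these exact commands are an addition, not part of the original write-up):

     # tighten permissions so sshd will accept the key
     chmod 700 ~/.ssh
     chmod 600 ~/.ssh/authorized_keys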

Step 5

Action: modify the Hadoop configuration files.

Files: under /home/rocketeer/Hadoop/hadoop-2.2.0/etc/hadoop:

      core-site.xml, hadoop-env.sh, hdfs-site.xml, masters, slaves, mapred-site.xml,
      yarn-site.xml

All additions and changes to the XML files go inside <configuration></configuration>.

Steps:

     cd Hadoop/hadoop-2.2.0/etc/hadoop

(1) core-site.xml: configure the namenode (default filesystem) address and the tmp directory.

     sudo vim core-site.xml

Add the following:

     <configuration>
         <property>
             <name>fs.defaultFS</name>
             <value>hdfs://localhost:9000</value>
         </property>

         <property>
             <name>hadoop.tmp.dir</name>
             <value>/home/rocketeer/Hadoop/hadoop2_tmp</value>
         </property>
     </configuration>

(2) hadoop-env.sh: set the Java path (required).

     export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64

(3) hdfs-site.xml: configure the namenode and datanode storage paths.

     sudo vim hdfs-site.xml

Add the following:

         <property>
             <name>dfs.namenode.name.dir</name>
             <value>${hadoop.tmp.dir}/namenode</value>
         </property>

         <property>
             <name>dfs.datanode.data.dir</name>
             <value>${hadoop.tmp.dir}/datanode</value>
         </property>
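With only a single datanode, it is also common to lower the block replication factor to 1. The original configuration leaves it at the default of 3 (visible later in the format log as defaultReplication = 3), so the property below is an optional addition rather than part of the original setup:

         <property>
             <name>dfs.replication</name>
             <value>1</value>
         </property>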

(4) mapred-site.xml: Hadoop 2.x introduced YARN, so most of the old mapred settings have moved to yarn-site.xml; here we mainly tell MapReduce to run on YARN.

    mv mapred-site.xml.template mapred-site.xml

    sudo vim mapred-site.xml

Add the following:

     <property>
         <name>mapreduce.job.tracker</name>
         <value>http://127.0.0.1:9001</value>
     </property>

     <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
     </property>

     <property>
         <name>mapreduce.system.dir</name>
         <value>/mapred/system</value>
         <final>true</final>
     </property>

     <property>
         <name>mapred.local.dir</name>
         <value>/mapred/local</value>
         <final>true</final>
     </property>

(5) yarn-site.xml: for a quick, simple test the defaults are fine (but see the optional snippet after item (8)).

(6) masters: can actually be left unconfigured here; it specifies the secondary namenode.

(7) slaves: specifies the datanodes (task nodes).

(8) Because this is a pseudo-distributed setup, both masters and slaves contain localhost. In a real distributed cluster the hosts listed here differ per node; compare with the blog posts linked in this article.
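If MapReduce jobs later hang or fail in the shuffle phase, the usual remedy is to declare the shuffle auxiliary service in yarn-site.xml. This is an optional addition; the original walkthrough sticks with the defaults:

     <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
     </property>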

Step 6

Action: format the namenode.

Steps:

    cd Hadoop/hadoop-2.2.0/bin

In this directory, run:

    hdfs namenode -format

If output like the following appears, the format succeeded:

[email protected]:~/Hadoop/hadoop-2.2.0/bin$ hdfs namenode -format
14/07/12 23:02:20 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = rocketeer/202.106.199.36
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0

STARTUP_MSG:   classpath =/home/rocketeer/Hadoop/hadoop-2.2.0/etc/hadoop:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jasp
er-runtime-5.5.23.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoo
p/yarn/lib/javax.inject-1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/rocketeer/Hadoop/
hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/rocketeer/Hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/contrib/capacity-scheduler/*.jar

STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_55
************************************************************/
14/07/12 23:02:20 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
OpenJDK 64-Bit Server VM warning: You have loaded library /home/rocketeer/Hadoop/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
14/07/12 23:02:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/07/12 23:02:49 WARN common.Util: Path $HADOOP_HDFS2/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
14/07/12 23:02:49 WARN common.Util: Path $HADOOP_HDFS2/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-e6deb9bd-356a-4dea-ba0e-d3af695a6c52
14/07/12 23:02:49 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/07/12 23:02:50 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/07/12 23:02:50 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/07/12 23:02:50 INFO util.GSet: Computing capacity for map BlocksMap
14/07/12 23:02:50 INFO util.GSet: VM type       = 64-bit
14/07/12 23:02:50 INFO util.GSet: 2.0% max memory = 966.7 MB
14/07/12 23:02:50 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/07/12 23:02:50 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/07/12 23:02:50 INFO blockmanagement.BlockManager: defaultReplication         = 3
14/07/12 23:02:50 INFO blockmanagement.BlockManager: maxReplication             = 512
14/07/12 23:02:50 INFO blockmanagement.BlockManager: minReplication             = 1
14/07/12 23:02:50 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/07/12 23:02:50 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/07/12 23:02:50 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/07/12 23:02:50 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/07/12 23:02:50 INFO namenode.FSNamesystem: fsOwner            = rocketeer (auth:SIMPLE)
14/07/12 23:02:50 INFO namenode.FSNamesystem: supergroup         = supergroup
14/07/12 23:02:50 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/07/12 23:02:50 INFO namenode.FSNamesystem: HA Enabled: false
14/07/12 23:02:50 INFO namenode.FSNamesystem: Append Enabled: true
14/07/12 23:02:51 INFO util.GSet: Computing capacity for map INodeMap
14/07/12 23:02:51 INFO util.GSet: VM type       = 64-bit
14/07/12 23:02:51 INFO util.GSet: 1.0% max memory = 966.7 MB
14/07/12 23:02:51 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/07/12 23:02:51 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/12 23:02:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/12 23:02:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/12 23:02:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/07/12 23:02:51 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/12 23:02:51 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/12 23:02:51 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/07/12 23:02:51 INFO util.GSet: VM type       = 64-bit
14/07/12 23:02:51 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
14/07/12 23:02:51 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /home/rocketeer/Hadoop/hadoop-2.2.0/bin/$HADOOP_HDFS2/namenode ? (Y or N) y
14/07/12 23:03:46 INFO common.Storage: Storage directory /home/rocketeer/Hadoop/hadoop-2.2.0/bin/$HADOOP_HDFS2/namenode has been successfully formatted.
14/07/12 23:03:46 INFO namenode.FSImage: Saving image file /home/rocketeer/Hadoop/hadoop-2.2.0/bin/$HADOOP_HDFS2/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
14/07/12 23:03:46 INFO namenode.FSImage: Image file /home/rocketeer/Hadoop/hadoop-2.2.0/bin/$HADOOP_HDFS2/namenode/current/fsimage.ckpt_0000000000000000000 of size 201 bytes saved in 0 seconds.
14/07/12 23:03:46 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/07/12 23:03:46 INFO util.ExitUtil: Exiting with status 0
14/07/12 23:03:46 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at rocketeer/202.106.199.36
************************************************************/
Step 7

Action: start the Hadoop 2.2.0 cluster. Either (1) start everything at once with sbin/start-all.sh, or (2) start the daemons step by step:

Start the namenode and datanode:

     sbin/hadoop-daemon.sh start namenode
     sbin/hadoop-daemon.sh start datanode

Check:

     jps

Expected output (PIDs will differ):

     12935 NameNode
     5309  Jps
     13012 DataNode

This shows the startup succeeded. If NameNode or DataNode is missing, startup failed; check the log files under the logs directory of the Hadoop installation.

A daemon can be stopped with sbin/hadoop-daemon.sh stop datanode (or namenode).

Start the YARN managers:

     sbin/yarn-daemon.sh start resourcemanager
     sbin/yarn-daemon.sh start nodemanager

Start the MapReduce history server:

     sbin/mr-jobhistory-daemon.sh start historyserver

Check again:

     jps

Expected output:

     13338 NodeManager
     13111 ResourceManager
     12935 NameNode
     5309  Jps
     13012 DataNode

This confirms everything started. The YARN daemons can likewise be stopped with sbin/yarn-daemon.sh stop resourcemanager (or nodemanager).

If yarn.resourcemanager.webapp.address is not set in yarn-site.xml, the ResourceManager web UI uses the default port 8088:

     http://127.0.0.1:8088/ — the Hadoop cluster management page.

If dfs.namenode.http-address is not set in hdfs-site.xml, the NameNode web UI uses the default port 50070:

     http://127.0.0.1:50070 — NameNode status information.

The history server is available at:

     http://127.0.0.1:19888
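To confirm the whole pseudo-distributed setup end to end, you can run one of the bundled example jobs. This is a sketch rather than part of the original post; it assumes the install path used above and the bundled hadoop-mapreduce-examples-2.2.0.jar:

     cd /home/rocketeer/Hadoop/hadoop-2.2.0

     # create an input directory in HDFS and upload the config files as sample data
     bin/hdfs dfs -mkdir -p /user/rocketeer/input
     bin/hdfs dfs -put etc/hadoop/*.xml /user/rocketeer/input

     # run the bundled grep example on YARN
     bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar grep /user/rocketeer/input /user/rocketeer/output 'dfs[a-z.]+'

     # inspect the result
     bin/hdfs dfs -cat /user/rocketeer/output/part-r-00000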

Compiling Hadoop-2.2.0 on a 64-bit OS:

Reference articles:

Compiling hadoop 2.2.0 on 64-bit Ubuntu 12.04 (CSDN blog)

Hadoop pre-setup, part 3: recompiling hadoop 2.2.0 for 64-bit (personally tested successfully)

Install the build dependencies:

    sudo apt-get install g++ autoconf automake libtool make cmake zlib1g-dev pkg-config libssl-dev

Install protobuf (Hadoop 2.2.0 requires protobuf 2.5.0). Download and extract the source, then run in sequence:

    $ ./configure --prefix=/usr
    $ sudo make
    $ sudo make check
    $ sudo make install

    protoc --version

to check the installed version.

Install maven via apt-get on Ubuntu:

    $ sudo apt-get install maven

Compile hadoop 2.2.0:

Extract the source to the user directory /home/rocketeer/Downloads.

The code unpacked from the current 2.2.0 source tarball has a bug that must be patched before compiling; otherwise building hadoop-auth fails with a missing Jetty test-dependency error.

The fix is as follows:

Edit the pom file below, found inside the hadoop source tree:

    hadoop-common-project/hadoop-auth/pom.xml

Open the pom file and add the following dependencies at line 54:

     <dependency>
         <groupId>org.mortbay.jetty</groupId>
         <artifactId>jetty-util</artifactId>
         <scope>test</scope>
     </dependency>
     <dependency>
         <groupId>org.mortbay.jetty</groupId>
         <artifactId>jetty</artifactId>
         <scope>test</scope>
     </dependency>

Enter the hadoop-2.2.0-src directory.

Since maven, protobuf, the Java environment, and the compilers are all installed, run directly:

    $ mvn package -Pdist,native -DskipTests -Dtar

If the build had already failed before the pom patch, simply re-run the same command after patching. Compiling is a slow process, so be patient.

When the build-success message appears, compilation has succeeded.

Installing and configuring hadoop 2.2.0:

The compiled distribution is now located in the hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0/ directory.
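To double-check that the freshly built native libraries really are 64-bit, you can inspect them with file; this quick check is an addition, not part of the referenced build guides:

     cd hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0
     file lib/native/libhadoop.so.1.0.0
     # expected output mentions: ELF 64-bit LSB shared object, x86-64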
