1. Here we download hadoop-2.6.0-cdh5.7.0 directly on the Linux machine (if you have already downloaded it locally and uploaded it, you can skip this step). First make sure your virtual machine has network access; you can ping Baidu to test, and output like the following means the network is up:

[root@master ~]# ping www.baidu.com
PING www.a.shifen.com (180.97.33.108) 56(84) bytes of data.
64 bytes from 180.97.33.108: icmp_seq=1 ttl=128 time=26.4 ms
64 bytes from 180.97.33.108: icmp_seq=1 ttl=127 time=26.4 ms (DUP!)
64 bytes from 180.97.33.108: icmp_seq=2 ttl=128 time=23.7 ms
64 bytes from 180.97.33.108: icmp_seq=2 ttl=127 time=23.7 ms (DUP!)
64 bytes from 180.97.33.108: icmp_seq=3 ttl=128 time=23.1 ms
64 bytes from 180.97.33.108: icmp_seq=3 ttl=127 time=23.5 ms (DUP!)
64 bytes from 180.97.33.108: icmp_seq=4 ttl=128 time=24.7 ms

2. As the root user, change into the /opt directory and download the archive:

[root@master ~]# cd /opt
[root@master opt]#
[root@master opt]# wget http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
[root@master opt]# ls
hadoop-2.6.0-cdh5.7.0.tar.gz
[root@master opt]# 

3. Extract Hadoop; once extraction finishes, check the result with ls:

[root@master opt]# tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz
[root@master opt]# ls
hadoop-2.6.0-cdh5.7.0  hadoop-2.6.0-cdh5.7.0.tar.gz
[root@master opt]#

4. Rename the directory. We will configure environment variables later, and typing the long name hadoop-2.6.0-cdh5.7.0 every time is inconvenient, so rename hadoop-2.6.0-cdh5.7.0 to just hadoop, then check with ls:

[root@master opt]# mv hadoop-2.6.0-cdh5.7.0 hadoop
[root@master opt]# ls
hadoop  hadoop-2.6.0-cdh5.7.0.tar.gz
[root@master opt]# 

5. Configure the Java environment variables. I uploaded the JDK archive in advance. Find where your JDK archive is, move it to /opt, and extract it there. Then edit the environment variables, appending the JAVA_HOME and HADOOP_HOME settings at the end of /etc/profile, and reload the file so they take effect.
The sequence of commands:

[root@master hadoop]# mv jdk-8u45-linux-x64.gz /opt/
[root@master hadoop]# cd /opt/
[root@master opt]# tar -zxvf jdk-8u45-linux-x64.gz
[root@master opt]# mv jdk1.8.0_45/ jdk
[root@master opt]# ls
hadoop  hadoop-2.6.0-cdh5.7.0.tar.gz  jdk  jdk-8u45-linux-x64.gz  rh
[root@master opt]# vi /etc/profile   // edit the environment variables; append the new settings at the end
[root@master opt]# source /etc/profile      // reload so the variables take effect
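
The lines to append to /etc/profile were shown in the original as a screenshot; the sketch below reconstructs them from the paths used in this guide (the JDK at /opt/jdk, Hadoop at /opt/hadoop), so adjust if your layout differs.

```shell
# Hypothetical reconstruction of the /etc/profile additions; the paths
# match the /opt/jdk and /opt/hadoop directories created in the steps above.
export JAVA_HOME=/opt/jdk
export HADOOP_HOME=/opt/hadoop
# Put the JDK and Hadoop bin/sbin directories on PATH so `java` and
# `hadoop` resolve to the copies installed above.
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```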

6. Check that the environment took effect and that Java and Hadoop are installed: run java -version and hadoop version to view the versions. (If java -version still reports the system OpenJDK instead of the JDK you installed under /opt/jdk, check that $JAVA_HOME/bin comes before the system directories in PATH.)

[root@master opt]# java -version
java version "1.7.0_45"
OpenJDK Runtime Environment (rhel-2.4.3.3.el6-x86_64 u45-b15)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
[root@master opt]# 
[root@master opt]# hadoop version
Hadoop 2.6.0-cdh5.7.0
Subversion http://github.com/cloudera/hadoop -r c00978c67b0d3fe9f3b896b5030741bd40bf541a
Compiled by jenkins on 2016-03-23T18:41Z
Compiled with protoc 2.5.0
From source with checksum b2eabfa328e763c88cb14168f9b372
This command was run using /opt/hadoop/share/hadoop/common/hadoop-common-2.6.0-cdh5.7.0.jar
[root@master opt]# 

7. Give the hadoop user ownership of /opt by running "chown -R hadoop /opt/"; running ll afterwards shows everything is now owned by the hadoop user:

[root@master opt]# chown -R hadoop /opt/ 
[root@master opt]# ll
total 473512
drwxr-xr-x. 14 hadoop 4001      4096 Mar 24  2016 hadoop
-rw-r--r--.  1 hadoop root 311585484 Apr  1  2016 hadoop-2.6.0-cdh5.7.0.tar.gz
drwxr-xr-x.  8 hadoop  143      4096 Apr 11  2015 jdk
-rw-rw-r--.  1 hadoop ke   173271626 Sep 11 15:37 jdk-8u45-linux-x64.gz
drwxr-xr-x.  2 hadoop root      4096 Nov 22  2013 rh
[root@master opt]# 

8. Switch to the hadoop user and configure Hadoop:

[root@master opt]# su hadoop
[hadoop@master hadoop]$ cd /opt/hadoop/etc/hadoop
[hadoop@master hadoop]$ ls
capacity-scheduler.xml      httpfs-env.sh            mapred-env.sh
configuration.xsl           httpfs-log4j.properties  mapred-queues.xml.template
container-executor.cfg      httpfs-signature.secret  mapred-site.xml.template
core-site.xml               httpfs-site.xml          slaves
hadoop-env.cmd              kms-acls.xml             ssl-client.xml.example
hadoop-env.sh               kms-env.sh               ssl-server.xml.example
hadoop-metrics2.properties  kms-log4j.properties     yarn-env.cmd
hadoop-metrics.properties   kms-site.xml             yarn-env.sh
hadoop-policy.xml           log4j.properties         yarn-site.xml
hdfs-site.xml               mapred-env.cmd
[hadoop@master hadoop]$

Here we can see the files we need to configure.
(1) First edit the three files "hadoop-env.sh", "mapred-env.sh", and "yarn-env.sh": find the JAVA_HOME line in each and point it at your own JDK location, i.e. change it to:
export JAVA_HOME=/opt/jdk
If the export line is preceded by a #, be sure to remove it.
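
The same edit can be made non-interactively. The helper below is a sketch (not from the original): it rewrites the JAVA_HOME line in a given file, uncommenting it if needed.

```shell
# Rewrites the JAVA_HOME line in an env script, removing any leading '#'.
# Assumes the stock files, where the line may read
# "# export JAVA_HOME=${JAVA_HOME}" or similar.
set_java_home() {   # usage: set_java_home <file>
    sed -i 's|^#* *export JAVA_HOME=.*|export JAVA_HOME=/opt/jdk|' "$1"
}
```

Run it in /opt/hadoop/etc/hadoop, e.g. `for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do set_java_home "$f"; done`.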
(2) Then edit core-site.xml and add the following:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
</configuration>

(3) Edit hdfs-site.xml and add the following:

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.name.dir</name>
                <value>/opt/hdfs/name</value>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/opt/hdfs/data</value>
        </property>
</configuration>

(4) Edit mapred-site.xml. Since this file does not exist yet, copy mapred-site.xml.template to mapred-site.xml:

[hadoop@master hadoop]$ cp mapred-site.xml.template mapred-site.xml

Then add the following:

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>

(5) Edit yarn-site.xml and add the following:

<configuration>

<!-- Site specific YARN configuration properties -->
	<property>
		<name>yarn.resourcemanager.address</name>
		<value>master:8080</value>
	</property>
	<property>
		<name>yarn.resourcemanager.resource-tracker.address</name>
		<value>master:8082</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
		<value>org.apache.hadoop.mapred.ShuffleHandler</value>
	</property>
</configuration>

(6) Edit the slaves file, setting it to the host name:

master

9. Format HDFS: change into the /opt/hadoop/bin directory and run hadoop namenode -format:

[hadoop@master bin]$ cd /opt/hadoop/bin
[hadoop@master bin]$ hadoop namenode -format

Output like the following means the format succeeded:

18/09/29 12:30:07 INFO common.Storage: Storage directory /opt/hdfs/name has been successfully formatted.
18/09/29 12:30:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/09/29 12:30:08 INFO util.ExitUtil: Exiting with status 0
18/09/29 12:30:08 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.137.114
************************************************************/

If the format fails, check whether you mistyped anything in the configuration files above (that is almost always the cause; read them carefully, as I also got it wrong several times when I first practiced this installation), whether the host name matches your own (master here), and whether the Hadoop environment variables took effect. Also, before formatting a second time you must delete the data, name, and tmp directories under /opt/hdfs, otherwise the format will not succeed.
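The cleanup before a second format is a single command; the paths below match the dfs.name.dir and dfs.data.dir values configured in hdfs-site.xml above, so adjust them if yours differ.

```shell
# Remove old NameNode/DataNode state before re-formatting; stale metadata
# left over from a previous format is why a second format otherwise fails.
rm -rf /opt/hdfs/name /opt/hdfs/data /opt/hdfs/tmp
```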
10. Start Hadoop and verify the installation.
First make the startup scripts executable (as the hadoop user):

[hadoop@master bin]$ chmod +x -R /opt/hadoop/sbin/

Then start Hadoop: as the hadoop user, change into /opt/hadoop/sbin and run:

[hadoop@master bin]$ cd /opt/hadoop/sbin/
[hadoop@master sbin]$ ./start-all.sh 

Once that finishes, run jps to check that the expected processes are running; output like the following means the startup succeeded:

[hadoop@master sbin]$ jps
3360 NodeManager
3268 ResourceManager
3671 Jps
2857 NameNode
3132 SecondaryNameNode
2975 DataNode
[hadoop@master sbin]$ 
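
The jps check can be scripted. The helper below is hypothetical (not part of Hadoop): it takes the output of `jps` and reports any of the five expected daemons that are missing.

```shell
# Reports expected Hadoop daemons missing from the given `jps` output.
# grep -w matches whole words, so "SecondaryNameNode" does not
# accidentally satisfy the check for "NameNode".
check_daemons() {   # usage: check_daemons "$(jps)"
    for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
        echo "$1" | grep -qw "$d" || echo "missing: $d"
    done
}
```

Call it as `check_daemons "$(jps)"`; empty output means all five processes are up.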

At this point the Hadoop pseudo-distributed setup is complete. Run ./stop-all.sh to stop the services. From now on, start and stop Hadoop as the hadoop user from /opt/hadoop/sbin. The stop command:

[hadoop@master sbin]$ ./stop-all.sh