
Installing Hadoop 3.0 on Linux: A Detailed Walkthrough

Today I tried installing Hadoop, as preparation for learning it.

I. Prepare the Environment

1.1 Check the operating system version

[root@cql ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)

1.2 Disable the firewall

[root@cql ~]# service iptables stop
[root@cql ~]# service ip6tables stop
[root@cql ~]# chkconfig iptables off
[root@cql ~]# chkconfig ip6tables off

1.3 Disable SELinux

[root@cql ~]# sed -i 's|enforcing|disabled|' /etc/sysconfig/selinux
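The sed edit only takes effect after a reboot. To also stop SELinux enforcing in the current session, an optional extra step beyond the original run:

[root@cql ~]# setenforce 0
[root@cql ~]# getenforce
Permissive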

II. Configure the JDK

1. Uninstall the bundled JDK 1.7
[root@cql ~]# rpm -qa|grep jdk
java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
[root@cql ~]# rpm -e --nodeps java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64

Upload the downloaded jdk-8u181-linux-x64.tar.gz to the VM (I put it under /soft) and extract the tarball:

[root@cql soft]# tar -xzvf jdk-8u181-linux-x64.tar.gz

2. Configure environment variables: edit .bash_profile and set JAVA_HOME, JRE_HOME, CLASSPATH, and PATH


export JAVA_HOME=/soft/jdk1.8.0_181
export JRE_HOME=/soft/jdk1.8.0_181/jre
export CLASSPATH=$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar

PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$JRE_HOME/bin
export PATH

Note that JAVA_HOME and JRE_HOME must be defined before the PATH line that references them; otherwise the $JAVA_HOME/bin and $JRE_HOME/bin entries expand to empty strings.

Reload the file for the changes to take effect:
[root@cql ~]# source .bash_profile

3. Verify the Java version

[root@cql ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

III. Download and Install Hadoop

Every release is available from http://hadoop.apache.org/; this time I am installing the 3.0 line.

I chose the binary download; the result is hadoop-3.0.3.tar.gz, which I uploaded to the /soft directory on the VM.

1. Extract the Hadoop tarball
[root@cql soft]# tar -xzvf hadoop-3.0.3.tar.gz
2. Configure environment variables by editing .bash_profile
[root@cql ~]# vi .bash_profile
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$JRE_HOME/bin:/soft/hadoop-3.0.3/bin:/soft/hadoop-3.0.3/sbin
export HADOOP_INSTALL=/soft/hadoop-3.0.3
This puts the bin/ and sbin/ directories of the extracted tree on the PATH.
3. Reload, then the Hadoop version is visible
[root@cql ~]# source .bash_profile
[root@cql ~]# hadoop version
Hadoop 3.0.3
Source code repository https://git-wip-us.apache.org/repos/asf/hadoop.git -r 37fd7d752db73d984dc31e0cdfd590d252f5e075
Compiled by yzhang on 2018-05-31T17:12Z
Compiled with protoc 2.5.0
From source with checksum 736cdcefa911261ad56d2d120bf1fa
This command was run using /soft/hadoop-3.0.3/share/hadoop/common/hadoop-common-3.0.3.jar
PS: in the settings above I only configured HADOOP_INSTALL and PATH and deliberately skipped HADOOP_HOME. The HADOOP_INSTALL directory already contains the bin and sbin directories with all the executables, so defining HADOOP_HOME as well can easily cause conflicts.

That completes the Hadoop installation; next comes the configuration.

IV. Configure Hadoop

Hadoop can be configured in three modes:

  1. Standalone: no daemons; everything runs in a single JVM. Suitable for running MapReduce programs during development, for testing and debugging.
  2. Pseudo-distributed: the Hadoop daemons run on the local machine, simulating a small-scale cluster.
  3. Fully distributed: the Hadoop daemons run on a cluster of machines.

Standalone mode needs no configuration: it is the default, single-machine, with no daemons (a quick smoke test is sketched below). Next we configure pseudo-distributed (pseudo) mode. Start by making a fresh copy of the hadoop directory under etc/ in the installation directory, named hadoop_pseudo.
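As promised above, standalone mode can be smoke-tested with the stock grep example from the Apache docs, run against the untouched default config under etc/hadoop; an optional sketch (the output directory must not exist beforehand):

[root@cql ~]# mkdir input
[root@cql ~]# cp /soft/hadoop-3.0.3/etc/hadoop/*.xml input
[root@cql ~]# hadoop jar /soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
[root@cql ~]# cat output/*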

1. Create the hadoop_pseudo configuration directory

[root@cql etc]# pwd
/soft/hadoop-3.0.3/etc
[root@cql etc]# ls -l
total 4
drwxr-xr-x. 3 2003 2003 4096 Jun  1 01:36 hadoop
[root@cql etc]# cp -R hadoop hadoop_pseudo
[root@cql etc]# ls -l
total 8
drwxr-xr-x. 3 2003 2003 4096 Jun  1 01:36 hadoop
drwxr-xr-x. 3 root root 4096 Aug 11 20:12 hadoop_pseudo

2. Configure the pseudo-distributed setup: go into the hadoop_pseudo directory

Four files need to be configured: core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml.

[root@cql hadoop_pseudo]# vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost/</value>
    </property>
</configuration>
This sets the default filesystem: an HDFS URI pointing at the local machine. With no port given, the NameNode's default RPC port 8020 is used (the netstat check later on confirms this).

[root@cql hadoop_pseudo]# vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Set the replication factor to 1, since pseudo-distributed mode has only a single DataNode.


[root@cql hadoop_pseudo]# vi mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Pseudo-distributed mode needs YARN configured; YARN is the resource-management framework that MapReduce runs on.

[root@cql hadoop_pseudo]# vi yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
This sets the ResourceManager host and enables the mapreduce_shuffle auxiliary service on the NodeManager.
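One optional extra while editing yarn-site.xml: the Hadoop 3 single-node guide also whitelists a few environment variables for containers, which matters once MapReduce jobs actually run on YARN. I did not need it for the steps in this post, so treat it as a hedged addition:

<property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>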

V. Configure SSH

As the above shows, pseudo-distributed mode is configured on a single machine. It is a special case of fully distributed mode, and Hadoop does not really distinguish between the two: when starting daemons, it reaches the cluster nodes (listed in the workers file; called slaves in Hadoop 2) over SSH. So all we need is to configure local SSH so that login requires no password.

[root@cql ~]# service sshd status
openssh-daemon (pid  1616) is running...
[root@cql ~]# which ssh-keygen
/usr/bin/ssh-keygen
[root@cql ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
50:97:43:55:28:ef:a2:fd:19:2d:98:c1:03:65:1d:36 root@cql
The key's randomart image is:
+--[ RSA 2048]----+
|        ..=+E+.  |
|       . ++.o.   |
|      . .  +     |
|       . o  .    |
|        S +.     |
|          .=..   |
|         oo.o .  |
|        . .  +   |
|           .o    |
+-----------------+
Note that there are three prompts above:
the location for the key pair (defaults to ~/.ssh/, with file names id_rsa and id_rsa.pub if you just press Enter);
whether the key should require a passphrase (press Enter for none);
the passphrase confirmation.
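For scripting, the same key can be generated with no prompts at all; an equivalent one-liner, assuming an empty passphrase is acceptable:

[root@cql ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa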
[root@cql ~]# cd .ssh
[root@cql .ssh]# ls -l
total 8
-rw-------. 1 root root 1675 Aug 11 21:17 id_rsa
-rw-r--r--. 1 root root  390 Aug 11 21:17 id_rsa.pub
[root@cql .ssh]# cat id_rsa.pub >> authorized_keys
[root@cql .ssh]# ssh localhost date
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is c5:57:76:63:28:a3:ef:59:be:72:81:de:87:94:fa:90.
Are you sure you want to continue connecting (yes/no)? 
Host key verification failed.

That failed.
After adding an option it succeeded, and a known_hosts file appeared under .ssh; after that, test logins go through:
[root@cql .ssh]# ssh -o StrictHostKeyChecking=no localhost
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Sat Aug 11 21:27:24 2018 from cql

Second attempt:
[root@cql .ssh]# ssh localhost date
Sat Aug 11 21:49:51 CST 2018
When I first cleaned up the .ssh directory I had deleted known_hosts, which is why the first attempt failed; the failure was tied to known_hosts. On the second setup I deleted the other files but kept known_hosts, and it worked on the first try.
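To avoid passing the -o option on every connection, the same behaviour can be pinned in the SSH client config instead; a hypothetical ~/.ssh/config entry, not part of the original setup:

Host localhost
    StrictHostKeyChecking no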

VI. Format the HDFS Filesystem

[root@cql ~]# hdfs namenode -format
WARNING: /soft/hadoop-3.0.3/logs does not exist. Creating.
2018-08-11 22:00:41,279 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cql/192.168.10.103
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.0.3
STARTUP_MSG:   classpath = /soft/hadoop-3.0.3/etc/hadoop:/soft/hadoop-3.0.3/share/hadoop/common/lib/commons-codec-1.4.jar:... (the full list of bundled jars runs for several pages and is trimmed here)
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 37fd7d752db73d984dc31e0cdfd590d252f5e075; compiled by 'yzhang' on 2018-05-31T17:12Z
STARTUP_MSG:   java = 1.8.0_181
************************************************************/
2018-08-11 22:00:41,323 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2018-08-11 22:00:41,348 INFO namenode.NameNode: createNameNode [-format]
2018-08-11 22:00:42,027 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-69157c14-18fc-49f1-85cc-8cd03b861716
2018-08-11 22:00:43,266 INFO namenode.FSEditLog: Edit logging is async:true
2018-08-11 22:00:43,311 INFO namenode.FSNamesystem: KeyProvider: null
2018-08-11 22:00:43,312 INFO namenode.FSNamesystem: fsLock is fair: true
2018-08-11 22:00:43,315 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2018-08-11 22:00:43,336 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2018-08-11 22:00:43,336 INFO namenode.FSNamesystem: supergroup          = supergroup
2018-08-11 22:00:43,337 INFO namenode.FSNamesystem: isPermissionEnabled = true
2018-08-11 22:00:43,337 INFO namenode.FSNamesystem: HA Enabled: false
2018-08-11 22:00:43,414 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2018-08-11 22:00:43,441 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2018-08-11 22:00:43,441 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2018-08-11 22:00:43,457 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2018-08-11 22:00:43,461 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Aug 11 22:00:43
2018-08-11 22:00:43,467 INFO util.GSet: Computing capacity for map BlocksMap
2018-08-11 22:00:43,467 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,474 INFO util.GSet: 2.0% max memory 450.5 MB = 9.0 MB
2018-08-11 22:00:43,474 INFO util.GSet: capacity      = 2^20 = 1048576 entries
2018-08-11 22:00:43,525 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2018-08-11 22:00:43,533 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2018-08-11 22:00:43,533 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: defaultReplication         = 3
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: maxReplication             = 512
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: minReplication             = 1
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
2018-08-11 22:00:43,534 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
2018-08-11 22:00:43,535 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2018-08-11 22:00:43,698 INFO util.GSet: Computing capacity for map INodeMap
2018-08-11 22:00:43,698 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,699 INFO util.GSet: 1.0% max memory 450.5 MB = 4.5 MB
2018-08-11 22:00:43,699 INFO util.GSet: capacity      = 2^19 = 524288 entries
2018-08-11 22:00:43,700 INFO namenode.FSDirectory: ACLs enabled? false
2018-08-11 22:00:43,700 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2018-08-11 22:00:43,700 INFO namenode.FSDirectory: XAttrs enabled? true
2018-08-11 22:00:43,700 INFO namenode.NameNode: Caching file names occurring more than 10 times
2018-08-11 22:00:43,712 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true
2018-08-11 22:00:43,735 INFO util.GSet: Computing capacity for map cachedBlocks
2018-08-11 22:00:43,735 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,735 INFO util.GSet: 0.25% max memory 450.5 MB = 1.1 MB
2018-08-11 22:00:43,735 INFO util.GSet: capacity      = 2^17 = 131072 entries
2018-08-11 22:00:43,752 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2018-08-11 22:00:43,752 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2018-08-11 22:00:43,752 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2018-08-11 22:00:43,771 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2018-08-11 22:00:43,771 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2018-08-11 22:00:43,776 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2018-08-11 22:00:43,776 INFO util.GSet: VM type       = 64-bit
2018-08-11 22:00:43,778 INFO util.GSet: 0.029999999329447746% max memory 450.5 MB = 138.4 KB
2018-08-11 22:00:43,778 INFO util.GSet: capacity      = 2^14 = 16384 entries
2018-08-11 22:00:43,855 INFO namenode.FSImage: Allocated new BlockPoolId: BP-753739993-192.168.10.103-1533996043842
2018-08-11 22:00:43,874 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
2018-08-11 22:00:43,903 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
2018-08-11 22:00:44,058 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 389 bytes saved in 0 seconds .
2018-08-11 22:00:44,090 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2018-08-11 22:00:44,109 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cql/192.168.10.103
************************************************************/
[root@cql ~]#

VII. Start and Stop the Daemons

1. Start HDFS

[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [cql]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
2018-08-11 22:07:50,293 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Startup failed: the errors say HDFS_NAMENODE_USER (and the other daemon users) are not defined. Edit sbin/start-dfs.sh and sbin/stop-dfs.sh and add the following at the top:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

[root@cql hadoop-3.0.3]# vi sbin/start-dfs.sh
# limitations under the License.
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

# Start hadoop dfs daemons.
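As an aside, patching the sbin scripts is not the only option: the same variables can be exported from etc/hadoop_pseudo/hadoop-env.sh, which leaves the stock scripts untouched. A sketch of that variant, not the route taken here:

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_DATANODE_SECURE_USER=hdfs
export HDFS_SECONDARYNAMENODE_USER=root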

Now start again:
[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
localhost: ERROR: JAVA_HOME is not set and could not be found.
Starting datanodes
localhost: ERROR: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [cql]
cql: ERROR: JAVA_HOME is not set and could not be found.
2018-08-11 22:19:05,869 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Another error: JAVA_HOME is not set, even though it is clearly set in the environment. What is going on? Some searching turned up the answer: Hadoop has its own config file that carries JAVA_HOME, so that file needs editing. First locate it; since I start from the pseudo-distributed config, the copy under hadoop_pseudo is the one to change:
[root@cql hadoop-3.0.3]# find /soft/hadoop-3.0.3 -name hadoop-env.sh
/soft/hadoop-3.0.3/etc/hadoop_pseudo/hadoop-env.sh
/soft/hadoop-3.0.3/etc/hadoop/hadoop-env.sh
[root@cql hadoop-3.0.3]# vi etc/hadoop_pseudo/hadoop-env.sh
export JAVA_HOME=/soft/jdk1.8.0_181
(this is my own JAVA_HOME path)

Start again:
[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [cql]
2018-08-11 22:27:28,031 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
There is still a warning, but all the daemons are up, so it could be ignored. Being a perfectionist, though, I wanted it solved too; plenty of obstacles, but no giving up.
a. For the warning
2018-08-11 22:27:28,031 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
the workaround is to add the following line to log4j.properties under the pseudo-distributed directory /soft/hadoop-3.0.3/etc/hadoop_pseudo (note this silences the logger rather than fixing the native library itself):
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
For the full story see: https://blog.csdn.net/l1028386804/article/details/51538611

This time it comes up cleanly:
[root@cql ~]# start-dfs.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [cql]
That took some doing.

A supplement: when I first edited start-dfs.sh and stop-dfs.sh, the answer I had found online used HADOOP_SECURE_DN_USER=hdfs, which still produced a warning:
WARNING: HADOOP_SECURE_DN_USER has been replaced by HDFS_DATANODE_SECURE_USER. Using value of HADOOP_SECURE_DN_USER.
Digging again, I replaced HADOOP_SECURE_DN_USER=hdfs with HDFS_DATANODE_SECURE_USER=hdfs and the warning finally went away. When building an environment for the first time: be careful, careful, and careful again. These are hard-won lessons.


2. Start YARN

[root@cql ~]# start-yarn.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting resourcemanager
ERROR: Attempting to operate on yarn resourcemanager as root
ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
Starting nodemanagers
ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.
The first start failed. Following the error messages, and with the earlier experience in hand, edit sbin/start-yarn.sh and sbin/stop-yarn.sh and add the corresponding settings:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

Start again:
[root@cql ~]# start-yarn.sh --config $HADOOP_INSTALL/etc/hadoop_pseudo
Starting resourcemanager
Starting nodemanagers
No problems this time; moving on.

3. Start the MapReduce history daemon

In this test environment, mr-jobhistory-daemon.sh start historyserver is optional; things work with or without it.

[root@cql ~]#  mr-jobhistory-daemon.sh start historyserver
WARNING: Use of this script to start the MR JobHistory daemon is deprecated.
WARNING: Attempting to execute replacement "mapred --daemon start" instead.
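As the warning spells out, the script itself is deprecated; the non-deprecated equivalent is:

[root@cql ~]# mapred --daemon start historyserver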
[root@cql ~]# jps
11232 NameNode
11572 SecondaryNameNode
12836 Jps
12167 NodeManager
12045 ResourceManager
12766 JobHistoryServer   -- now running
11359 DataNode
For the full class names, use:
[root@cql ~]# jps -l
11232 org.apache.hadoop.hdfs.server.namenode.NameNode
12995 org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
11572 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
12167 org.apache.hadoop.yarn.server.nodemanager.NodeManager
13049 sun.tools.jps.Jps
12045 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
11359 org.apache.hadoop.hdfs.server.datanode.DataNode

4. View in the browser

Note:

Ports differ between releases: in the earlier 2.7 versions the NameNode web UI listened on 50070, while in 3.0 it is 9870. Check the settings of your own install; if you really cannot find the port anywhere, you can also work it out from the process:

[root@cql ~]# jps
11232 NameNode
12995 JobHistoryServer
11572 SecondaryNameNode
13831 Jps
12167 NodeManager
12045 ResourceManager
11359 DataNode
[root@cql ~]#
[root@cql ~]# netstat -nap |grep 11232
tcp        0      0 0.0.0.0:9870                0.0.0.0:*                   LISTEN      11232/java          
tcp        0      0 127.0.0.1:8020              0.0.0.0:*                   LISTEN      11232/java          
tcp        0      0 127.0.0.1:8020              127.0.0.1:10002             ESTABLISHED 11232/java          
unix  2      [ ]         STREAM     CONNECTED     60145  11232/java          
unix  2      [ ]         STREAM     CONNECTED     60130  11232/java 
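With the daemons up, the web UIs should be reachable at the addresses below; 9870 is confirmed by the netstat output above, while 8088 is YARN's stock ResourceManager port, assumed unchanged here:

http://192.168.10.103:9870   -- HDFS NameNode
http://192.168.10.103:8088   -- YARN ResourceManager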

Jobs can be viewed in the web UI as well. (screenshot omitted)

Finally, create the user directory:

[root@cql ~]# export HADOOP_CONF_DIR=/soft/hadoop-3.0.3/etc/hadoop_pseudo
[root@cql ~]# hadoop fs -ls /
[root@cql ~]# hadoop fs -mkdir /user/
[root@cql ~]# hadoop fs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-08-12 00:12 /user
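
To finish, the current user can get a home directory in HDFS, and one of the bundled examples makes a handy end-to-end smoke test; the examples jar path matches this install, but the pi job is an extra sketch rather than part of the original session:

[root@cql ~]# hadoop fs -mkdir /user/root
[root@cql ~]# hadoop jar /soft/hadoop-3.0.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar pi 2 10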