
Setting Up an HDFS Cluster

Reposted from: https://blog.csdn.net/wanbf123/article/details/81948026

 

wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz

Generate an SSH key pair on each node:

ssh-keygen -t rsa

On the first node, append every node's public key to authorized_keys, then push authorized_keys and known_hosts back out to the other nodes (the node addresses are mangled in the source page; root@node2 and root@node3 stand in for them):

# cat id_rsa.pub >> authorized_keys
# ssh root@node2 cat ~/.ssh/id_rsa.pub >> authorized_keys
# ssh root@node3 cat ~/.ssh/id_rsa.pub >> authorized_keys
# scp authorized_keys root@node2:/root/.ssh/
# scp authorized_keys root@node3:/root/.ssh/
# scp known_hosts root@node2:/root/.ssh/
# scp known_hosts root@node3:/root/.ssh/

Create the HDFS working directories and upload a test file:

[root@node1 localfiles]# hdfs dfs -mkdir /user
[root@node1 localfiles]# hdfs dfs -mkdir /user/root
[root@node1 localfiles]# hdfs dfs -mkdir /user/root/input
[root@node1 localfiles]# hdfs dfs -mkdir /user/root/output
[root@node1 localfiles]# hdfs dfs -put ./testHdfs.txt /user/root/input
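A quick way to confirm the upload landed (not in the original; these are standard HDFS shell commands):

hdfs dfs -ls /user/root/input                  # should list testHdfs.txt
hdfs dfs -cat /user/root/input/testHdfs.txt    # should print the file's contents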
 

Overview of How Hadoop HA Works

Why does Hadoop need an HA mechanism?
  HA: High Availability

  Before Hadoop 2.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster. In a cluster with only one NameNode, if the NameNode machine failed (a crash, or a software or hardware upgrade), the entire cluster was unusable until the NameNode was restarted.


So how is this solved?
  HDFS HA solves the problem by configuring two NameNodes, Active and Standby, giving the cluster a hot standby for the NameNode. If the active machine crashes, or has to be taken down for maintenance or an upgrade, the NameNode role can be switched to the other machine quickly.

  In a typical HA HDFS cluster, two separate machines are configured as NameNodes. At any point in time, exactly one NameNode is in the Active state while the other is in Standby. The Active NameNode handles all client operations in the cluster; the Standby NameNode acts purely as a backup, ready to take over quickly should the Active NameNode fail.

  To keep the metadata of the Active and Standby NameNodes synchronized in real time (in practice, the edit log), a shared storage system is needed; it can be NFS, QJM (Quorum Journal Manager), or ZooKeeper. The Active NameNode writes to the shared storage, and the Standby watches it: whenever new data is written, the Standby reads it and loads it into its own memory, keeping its in-memory state essentially identical to the Active NameNode's, so that in an emergency the Standby can switch to Active quickly. A fast switchover also requires the Standby to hold the cluster's latest block information, so the DataNodes are configured with the locations of both NameNodes and send block reports and heartbeats to both.
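Once such a pair is configured (as done later in this post), the division of roles is directly observable from any cluster node; a small sketch using standard Hadoop 2.x commands:

hdfs getconf -namenodes              # hosts of the NameNodes behind the configured nameservice
hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
hdfs haadmin -getServiceState nn2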

 

Cluster plan
  Description: a Hadoop HA cluster depends on ZooKeeper, so three of the machines are picked to serve as the ZooKeeper cluster. Four hosts are prepared in total: hadoop1, hadoop2, hadoop3 and hadoop4. hadoop1 and hadoop2 form the active/standby NameNode pair, and hadoop3 and hadoop4 form the active/standby ResourceManager pair.

Four machines (the original shows the role table as an image; the roles below are reconstructed from the steps that follow):

host      roles
hadoop1   NameNode (nn1), ZKFC, DataNode, JournalNode, NodeManager, ZooKeeper, JobHistoryServer
hadoop2   NameNode (nn2), ZKFC, DataNode, JournalNode, NodeManager, ZooKeeper
hadoop3   ResourceManager (rm1), DataNode, JournalNode, NodeManager, ZooKeeper
hadoop4   ResourceManager (rm2), DataNode, NodeManager, ZooKeeper (observer)

 

Preparing the cluster servers
1. Set the hostname

2. Set the IP address

3. Add hostname-to-IP mappings

4. Create a regular user, hadoop, and grant it sudoer rights

5. Set the system run level

6. Disable the firewall and SELinux

7. Install the JDK

Two ways to prepare the nodes (a consolidated sketch of steps 1-6 follows this list):

  1. Configure every node individually. This is tedious; in a production environment it can be scripted.

  2. In a virtualized environment, perform the seven steps above once and then clone the machine.

  3. Either way, afterwards set up passwordless SSH login and a time-synchronization service across the cluster.

8. Configure passwordless SSH login

9. Synchronize the server clocks
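The original lists these steps without commands; below is a minimal sketch, assuming CentOS 6 (suggested by the hadoop-2.7.5-centos-6.7 tarball used later) and example host names:

# steps 1-3: hostname, static IP, host mappings
vi /etc/sysconfig/network                        # HOSTNAME=hadoop1
vi /etc/sysconfig/network-scripts/ifcfg-eth0     # static IP for this node
vi /etc/hosts                                    # map hadoop1..hadoop4 to their IPs
# step 4: regular user with sudo rights
useradd hadoop && passwd hadoop
echo 'hadoop ALL=(ALL) ALL' >> /etc/sudoers      # or add the line via visudo
# step 6: firewall and SELinux off
service iptables stop && chkconfig iptables off
setenforce 0                                     # plus SELINUX=disabled in /etc/selinux/config
# steps 8-9: passwordless SSH and clock sync (repeat per node)
su - hadoop -c 'ssh-keygen -t rsa && ssh-copy-id hadoop@hadoop2'
ntpdate <your-ntp-server>                        # <your-ntp-server> is a placeholder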

Installing the cluster

1. Install the ZooKeeper cluster
For detailed installation steps, see the earlier write-up: http://www.cnblogs.com/qingyunzong/p/8619184.html


2. Install the Hadoop cluster
(1) Get the installation package

  Download it from the official site or from a mirror

  http://hadoop.apache.org/

  http://mirrors.hust.edu.cn/apache/

(2) Upload and unpack

[hadoop@hadoop1 ~]$ ls
apps  hadoop-2.7.5-centos-6.7.tar.gz  movie2.jar  users.dat                zookeeper.out
data  log                             output2     zookeeper-3.4.10.tar.gz
[hadoop@hadoop1 ~]$ tar -zxvf hadoop-2.7.5-centos-6.7.tar.gz -C apps/
(3) Edit the configuration files

  Configuration file directory: /home/hadoop/apps/hadoop-2.7.5/etc/hadoop

  Edit the hadoop-env.sh file

[hadoop@hadoop1 ~]$ cd apps/hadoop-2.7.5/etc/hadoop/
[hadoop@hadoop1 hadoop]$ echo $JAVA_HOME
/usr/local/jdk1.8.0_73
[hadoop@hadoop1 hadoop]$ vi hadoop-env.sh 
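The edit itself is shown as a screenshot in the original; presumably it hard-codes JAVA_HOME in hadoop-env.sh to the path echoed above:

# in hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_73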


  Edit core-site.xml

[hadoop@hadoop1 hadoop]$ vi core-site.xml
<configuration>
    <!-- Set the HDFS nameservice to myha01 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://myha01/</value>
    </property>
 
    <!-- Hadoop temporary directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/data/hadoopdata/</value>
    </property>
 
    <!-- ZooKeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181,hadoop4:2181</value>
    </property>
 
    <!-- Timeout for Hadoop's connection to ZooKeeper -->
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>1000</value>
        <description>ms</description>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
        <description>
                If "true", enable permission checking in HDFS.
                If "false", permission checking is turned off
        </description>
    </property>
</configuration>
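Once the package has been distributed and the environment variables set (later steps), a quick way to confirm these values are being picked up, using the standard getconf tool:

hdfs getconf -confKey fs.defaultFS            # should print hdfs://myha01/
hdfs getconf -confKey ha.zookeeper.quorum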

 

  Edit hdfs-site.xml

[hadoop@hadoop1 hadoop]$ vi hdfs-site.xml 
 

  
<configuration>
 
    <!-- Replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
 
    <!-- Working (data storage) directories for the namenode and datanodes -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/data/hadoopdata/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/data/hadoopdata/dfs/data</value>
    </property>
 
    <!-- Enable webhdfs -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Set the HDFS nameservice to myha01; must match core-site.xml.
         dfs.ha.namenodes.[nameservice id] assigns a unique identifier to each NameNode in the nameservice:
         a comma-separated list of NameNode IDs, which is how the DataNodes recognize all of the NameNodes.
         Here "myha01" is the nameservice ID and "nn1" and "nn2" are the NameNode identifiers.
    -->
    <property>
        <name>dfs.nameservices</name>
        <value>myha01</value>
    </property>
 
    <!-- myha01 contains two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.myha01</name>
        <value>nn1,nn2</value>
    </property>
 
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.myha01.nn1</name>
        <value>hadoop1:9000</value>
    </property>
 
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.myha01.nn1</name>
        <value>hadoop1:50070</value>
    </property>
 
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.myha01.nn2</name>
        <value>hadoop2:9000</value>
    </property>
 
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.myha01.nn2</name>
        <value>hadoop2:50070</value>
    </property>
 
    <!-- Shared storage for the NameNode's edits metadata, i.e. the JournalNode list.
         URL format: qjournal://host1:port1;host2:port2;host3:port3/journalId
         Using the nameservice as the journalId is recommended; the default port is 8485. -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop1:8485;hadoop2:8485;hadoop3:8485/myha01</value>
    </property>
 
    <!-- Where the JournalNodes keep their data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/data/journaldata</value>
    </property>
 
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
 
    <!-- How clients locate the active NameNode after a failover -->
    <property>
        <name>dfs.client.failover.proxy.provider.myha01</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
 
    <!-- Fencing methods; to use several, separate them with newlines, one mechanism per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
 
    <!-- The sshfence mechanism requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
 
    <!-- Timeout for the sshfence mechanism -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
 
    <property>
        <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
        <value>60000</value>
    </property>
</configuration>
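Worth spelling out what the proxy-provider setting buys you: clients address the logical nameservice rather than a concrete host, and ConfiguredFailoverProxyProvider transparently routes them to whichever NameNode is currently active. A small sketch:

hdfs dfs -ls hdfs://myha01/      # works regardless of which NameNode is active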
  Edit mapred-site.xml 

[hadoop@hadoop1 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop1 hadoop]$ vi mapred-site.xml
 
<configuration>
    <!-- Run MapReduce on the YARN framework -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    
    <!-- MapReduce jobhistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
    </property>
    
    <!-- Job history server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
    </property>
</configuration>
 

  Edit yarn-site.xml 

[hadoop@hadoop1 hadoop]$ vi yarn-site.xml 
 

<configuration>
    <!-- Enable ResourceManager high availability -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Cluster id for the RM pair -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
 
    <!-- Logical ids of the RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
 
    <!-- Host of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop3</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop4</value>
    </property>
 
    <!-- ZooKeeper quorum address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
    </property>
 
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
 
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
 
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>
 
    <!-- Enable automatic recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Store the resourcemanager's state in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
</configuration>

 

   Edit slaves (the hosts listed here are where start-dfs.sh and start-yarn.sh will launch DataNodes and NodeManagers)

[hadoop@hadoop1 hadoop]$ vi slaves 
hadoop1
hadoop2
hadoop3
hadoop4
 (4) Distribute the Hadoop package to the other cluster nodes

Important: the Hadoop installation directory, and the configuration inside it, must be identical on every server.

[hadoop@hadoop1 apps]$ scp -r hadoop-2.7.5/ hadoop2:$PWD
[hadoop@hadoop1 apps]$ scp -r hadoop-2.7.5/ hadoop3:$PWD
[hadoop@hadoop1 apps]$ scp -r hadoop-2.7.5/ hadoop4:$PWD
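A quick sanity check, not in the original, that the copies really are identical: compare a checksum of one of the config files across nodes.

[hadoop@hadoop1 apps]$ md5sum hadoop-2.7.5/etc/hadoop/hdfs-site.xml
[hadoop@hadoop1 apps]$ for h in hadoop2 hadoop3 hadoop4; do ssh $h md5sum apps/hadoop-2.7.5/etc/hadoop/hdfs-site.xml; done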
 (5) Configure the Hadoop environment variables

Be careful:

1. If you installed as root, edit /etc/profile (system-wide variables).

2. If you installed as a regular user, edit ~/.bashrc (per-user variables).

I installed as the hadoop user.

[hadoop@hadoop1 ~]$ vi .bashrc
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
 

  Reload the environment variables

[hadoop@hadoop1 bin]$ source ~/.bashrc 
 (6) Check the Hadoop version

 

[hadoop@hadoop1 ~]$ hadoop version
Hadoop 2.7.5
Subversion Unknown -r Unknown
Compiled by root on 2017-12-24T05:30Z
Compiled with protoc 2.5.0
From source with checksum 9f118f95f47043332d51891e37f736e9
This command was run using /home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/hadoop-common-2.7.5.jar
[hadoop@hadoop1 ~]$ 
 

 

Initializing the Hadoop HA cluster
Important: perform the following steps strictly in this order.


1. Start ZooKeeper
  Start the zookeeper service on all four servers.

  hadoop1

[hadoop@hadoop1 conf]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop1 conf]$ jps
2674 Jps
2647 QuorumPeerMain
[hadoop@hadoop1 conf]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop1 conf]$ 
  hadoop2

[hadoop@hadoop2 conf]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop2 conf]$ jps
2592 QuorumPeerMain
2619 Jps
[hadoop@hadoop2 conf]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop2 conf]$ 
  hadoop3

[hadoop@hadoop3 conf]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop3 conf]$ jps
16612 QuorumPeerMain
16647 Jps
[hadoop@hadoop3 conf]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[hadoop@hadoop3 conf]$ 
  hadoop4

[hadoop@hadoop4 conf]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop4 conf]$ jps
3596 Jps
3567 QuorumPeerMain
[hadoop@hadoop4 conf]$ zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apps/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: observer
[hadoop@hadoop4 conf]$ 

2. Start the JournalNode process on each node configured as a journalnode
  Per the plan above, mine run on hadoop1, hadoop2 and hadoop3; the start command is as follows.

  hadoop1

[hadoop@hadoop1 conf]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop1.out
[hadoop@hadoop1 conf]$ jps
2739 JournalNode
2788 Jps
2647 QuorumPeerMain
[hadoop@hadoop1 conf]$ 
  hadoop2

[hadoop@hadoop2 conf]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop2.out
[hadoop@hadoop2 conf]$ jps
2592 QuorumPeerMain
3049 JournalNode
3102 Jps
[hadoop@hadoop2 conf]$ 
  hadoop3

[hadoop@hadoop3 conf]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-journalnode-hadoop3.out
[hadoop@hadoop3 conf]$ jps
16612 QuorumPeerMain
16712 JournalNode
16766 Jps
[hadoop@hadoop3 conf]$ 

3. Format a namenode
  First pick one namenode node (hadoop1) and format it

[hadoop@hadoop1 ~]$ hadoop namenode -format

4. Copy the metadata generated on the hadoop1 node over to the other namenode (hadoop2)
[hadoop@hadoop1 ~]$ cd data/
[hadoop@hadoop1 data]$ ls
hadoopdata journaldata zkdata
[hadoop@hadoop1 data]$ scp -r hadoopdata/ hadoop2:$PWD
VERSION 100% 206 0.2KB/s 00:00 
fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00 
fsimage_0000000000000000000 100% 323 0.3KB/s 00:00 
seen_txid 100% 2 0.0KB/s 00:00 
[hadoop@hadoop1 data]$
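An equivalent, less error-prone alternative to the manual scp (a standard Hadoop 2.x command, though not what the original uses) is to run the bootstrap command on the standby namenode instead:

[hadoop@hadoop2 ~]$ hdfs namenode -bootstrapStandby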

 


5. Format zkfc
Important: this must be done on a namenode node only.

[hadoop@hadoop1 data]$ hdfs zkfc -formatZK
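formatZK creates the znode (under /hadoop-ha) that the two ZKFCs will use for leader election. A quick way to confirm it, sketched with the standard ZooKeeper CLI and the expected output:

[hadoop@hadoop1 ~]$ zkCli.sh -server hadoop1:2181
[zk: hadoop1:2181(CONNECTED) 0] ls /hadoop-ha
[myha01]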


 

Starting the cluster

1. Start HDFS
  The startup log shows which daemons get started.

[hadoop@hadoop1 ~]$ start-dfs.sh
Starting namenodes on [hadoop1 hadoop2]
hadoop2: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop2.out
hadoop1: starting namenode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop3: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop3.out
hadoop4: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop4.out
hadoop2: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop1: starting datanode, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-datanode-hadoop1.out
Starting journal nodes [hadoop1 hadoop2 hadoop3]
hadoop3: journalnode running as process 16712. Stop it first.
hadoop2: journalnode running as process 3049. Stop it first.
hadoop1: journalnode running as process 2739. Stop it first.
Starting ZK Failover Controllers on NN hosts [hadoop1 hadoop2]
hadoop2: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop2.out
hadoop1: starting zkfc, logging to /home/hadoop/apps/hadoop-2.7.5/logs/hadoop-hadoop-zkfc-hadoop1.out
[hadoop@hadoop1 ~]$ 
  Check that each node is running the expected daemons (the original shows jps screenshots for hadoop1 through hadoop4; at this point hadoop1 and hadoop2 should each show a NameNode and a DFSZKFailoverController, hadoop1 through hadoop3 a JournalNode, and all four a DataNode and QuorumPeerMain).


2. Start YARN
  Pick either of the two resourcemanager nodes and start YARN there (the log below shows it was hadoop4)

[hadoop@hadoop4 ~]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-resourcemanager-hadoop4.out
hadoop3: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-hadoop3.out
hadoop2: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-hadoop2.out
hadoop4: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-hadoop4.out
hadoop1: starting nodemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-nodemanager-hadoop1.out
[hadoop@hadoop4 ~]$ 
 

  After a normal start, check each node's daemons again (screenshots of hadoop1 through hadoop4 in the original).

start-yarn.sh starts a resourcemanager only on the node where it is run, so if the standby node's resourcemanager did not come up, start it by hand; here that is done on hadoop3:

[hadoop@hadoop3 ~]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/apps/hadoop-2.7.5/logs/yarn-hadoop-resourcemanager-hadoop3.out
[hadoop@hadoop3 ~]$ jps
17492 ResourceManager
16612 QuorumPeerMain
16712 JournalNode
17532 Jps
17356 NodeManager
16830 DataNode
[hadoop@hadoop3 ~]$ 
 


3. Start the MapReduce job history server
[hadoop@hadoop1 ~]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/apps/hadoop-2.7.5/logs/mapred-hadoop-historyserver-hadoop1.out
[hadoop@hadoop1 ~]$ jps
4016 NodeManager
2739 JournalNode
4259 Jps
3844 DFSZKFailoverController
2647 QuorumPeerMain
3546 DataNode
4221 JobHistoryServer
3407 NameNode
[hadoop@hadoop1 ~]$ 
 


4. Check the state of each master
HDFS

[hadoop@hadoop1 ~]$ hdfs haadmin -getServiceState nn1
standby
[hadoop@hadoop1 ~]$ hdfs haadmin -getServiceState nn2
active
[hadoop@hadoop1 ~]$ 


YARN

[hadoop@hadoop1 ~]$ yarn rmadmin -getServiceState rm1
standby
[hadoop@hadoop1 ~]$ yarn rmadmin -getServiceState rm2
active
[hadoop@hadoop1 ~]$ 


5. Check via the web UIs

HDFS

hadoop1 (http://hadoop1:50070, per the http-address configured above; screenshot in the original)

hadoop2 (http://hadoop2:50070; screenshot in the original)

YARN

The standby node's page automatically redirects to the active node (the YARN web UI defaults to port 8088).

MapReduce history server web UI (http://hadoop1:19888, as configured; screenshot in the original)

 Testing the cluster

1. Kill the active namenode and see what happens to the cluster
The namenode on hadoop2 is currently active; kill its process and see whether the standby namenode on hadoop1 automatically switches to active.

[hadoop@hadoop2 ~]$ jps
4032 QuorumPeerMain
4400 DFSZKFailoverController
4546 NodeManager
4198 DataNode
4745 Jps
4122 NameNode
4298 JournalNode
[hadoop@hadoop2 ~]$ kill -9 4122
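The original post breaks off here; to finish the experiment, one would check from any node that nn1 has taken over (expected output sketched):

[hadoop@hadoop1 ~]$ hdfs haadmin -getServiceState nn1
active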
--------------------- 
Author: wwyh520 
Source: CSDN 
Original: https://blog.csdn.net/wanbf123/article/details/81948026 
Copyright notice: this is the blogger's original article; please include a link to the source when reposting.