1. Installing the ZooKeeper cluster

(I have personally built this cluster successfully. Many details are involved and I may not have written every one of them down; if you run into problems, feel free to leave a comment.)

Prerequisite: prepare three CentOS 7.0 virtual machines.

(1) First set each virtual machine's network connection mode to NAT, then give each virtual machine a static IP address (in the VM's network settings), then set each host's hostname (/etc/hostname), and finally write the IP-to-hostname mappings below into /etc/hosts on every virtual machine. Note that this must be done on every machine; a command sketch follows the list. (This article focuses on building the cluster, so if you are unsure about any of these preparation steps, please look them up yourself. Make sure this step is fully completed, otherwise many problems will show up later.)
192.168.70.103 node1
192.168.70.104 node2
192.168.70.105 node3
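A minimal command sketch of this preparation on CentOS 7 (assumptions: the NAT interface is named ens33 and the NAT gateway is 192.168.70.2; adjust both to your own environment):

# On node1 (repeat on node2/node3 with their own hostname and IPADDR)
hostnamectl set-hostname node1
# Static IP for the assumed NAT interface ens33
cat >> /etc/sysconfig/network-scripts/ifcfg-ens33 <<'EOF'
BOOTPROTO=static
IPADDR=192.168.70.103
NETMASK=255.255.255.0
GATEWAY=192.168.70.2
DNS1=192.168.70.2
ONBOOT=yes
EOF
systemctl restart network
# Append the three IP/hostname mappings listed above to /etc/hosts on every machine
vi /etc/hosts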
(2) Download and extract ZooKeeper
Log in to node1, change to the /opt directory, and run the following commands:
[root@node1 opt]$ wget http://apache.fayea.com/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
[root@node1 opt]$ tar -zxvf zookeeper-3.4.10.tar.gz
[root@node1 opt]$ chmod +wxr zookeeper-3.4.10
(3) Edit the ZooKeeper configuration file and create the data and log directories
[root@node1 opt]$ cd zookeeper-3.4.10
[root@node1 zookeeper-3.4.10]$ mkdir data
[root@node1 zookeeper-3.4.10]$ mkdir logs
[root@node1 zookeeper-3.4.10]$ cp conf/zoo_sample.cfg conf/zoo.cfg
[root@node1 zookeeper-3.4.10]$ vi conf/zoo.cfg (after opening the file, find/add the entries below and set them as shown; the remaining sample defaults such as clientPort=2181 can stay as they are)
# example sakes.
dataDir=/opt/zookeeper-3.4.10/data
dataLogDir=/opt/zookeeper-3.4.10/logs
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

[root@node1 zookeeper-3.4.10]$ cd data
[root@node1 data]$ vi myid
1
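The value in myid must match the N in the corresponding server.N line of zoo.cfg. Equivalently (a one-line alternative to editing the file with vi):

# On node1: the value must match the N in "server.1=node1:2888:3888"
echo 1 > /opt/zookeeper-3.4.10/data/myid
cat /opt/zookeeper-3.4.10/data/myid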
(4) Copy zookeeper-3.4.10 from node1 to node2 and node3
[root@node1 opt]$ scp -r zookeeper-3.4.10 root@node2:/opt/zookeeper-3.4.10
[root@node1 opt]$ scp -r zookeeper-3.4.10 root@node3:/opt/zookeeper-3.4.10
(5) On node2 and node3, change the value in myid to 2 and 3 respectively
[root@node2 zookeeper-3.4.10]$ vi data/myid 
2
[root@node3 zookeeper-3.4.10]$ vi data/myid 
3
(6) Start ZooKeeper on node1, node2 and node3
[root@node1 zookeeper-3.4.10]$ bin/zkServer.sh start
[root@node2 zookeeper-3.4.10]$ bin/zkServer.sh start
[root@node3 zookeeper-3.4.10]$ bin/zkServer.sh start
(7) Check the ZooKeeper status (only check after ZooKeeper has been started on all three virtual machines; anyone who has studied ZooKeeper will know that the "status" here means whether each node is the leader or a follower)

[root@node1 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[root@node2 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[root@node3 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower

(8) Verify the ZooKeeper cluster
[root@node1 zookeeper-3.4.10]$ bin/zkCli.sh
Connecting to c7003:2181
2017-04-02 03:06:12,251 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
2017-04-02 03:06:12,257 [myid:] - INFO [main:Environment@100] - Client environment:host.name=c7003.ambari.apache.org
2017-04-02 03:06:12,257 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.8.0_121
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/opt/jdk1.8.0_121/jre
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/opt/zookeeper-3.4.10/bin/../build/classes:/opt/zookeeper-3.4.10/bin/../build/lib/*.jar:/opt/zookeeper-3.4.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/zookeeper-3.4.10/bin/../lib/slf4j-api-1.6.1.jar:/opt/zookeeper-3.4.10/bin/../lib/netty-3.10.5.Final.jar:/opt/zookeeper-3.4.10/bin/../lib/log4j-1.2.16.jar:/opt/zookeeper-3.4.10/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.10/bin/../zookeeper-3.4.10.jar:/opt/zookeeper-3.4.10/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.10/bin/../conf:
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2017-04-02 03:06:12,260 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2017-04-02 03:06:12,261 [myid:] - INFO [main:Environment@100] - Client environment:os.version=4.1.12-32.el7uek.x86_64
2017-04-02 03:06:12,261 [myid:] - INFO [main:Environment@100] - Client environment:user.name=vagrant
2017-04-02 03:06:12,261 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/home/vagrant
2017-04-02 03:06:12,261 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/opt/zookeeper-3.4.10
2017-04-02 03:06:12,262 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=c7003:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher
Welcome to ZooKeeper!

ls /
[zookeeper, zk_test] 
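A further check (a sketch; /zk_test2 is just an example znode name) is to create a znode from the client on one node and read it back from another:

# Inside bin/zkCli.sh on node1
create /zk_test2 "hello"
# Inside bin/zkCli.sh on node2, the znode created on node1 should be visible
get /zk_test2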
At this point, the ZooKeeper cluster installation is complete!

2. Building the Hadoop HA distributed cluster

Overview

In Hadoop 2 there can be more than one NameNode (currently at most two). Both have the same role: one is in the active state and the other is in standby. While the cluster is running, only the active NameNode serves requests; the standby NameNode stays on hot standby and continuously synchronizes the active NameNode's data. If the active NameNode fails, the standby NameNode can switch to active and keep the cluster working.
The two NameNodes effectively share their data in real time. The new HDFS uses a shared-storage mechanism for this: either a Quorum Journal Node (JournalNode) cluster or a Network File System (NFS). NFS works at the operating-system level, while JournalNodes work at the Hadoop level; here we use a JournalNode cluster for data sharing, which is also the mainstream approach.
To keep their data in sync, the two NameNodes communicate through a group of independent processes called JournalNodes. Whenever the namespace of the active NameNode is modified, the change is recorded on a majority of the JournalNodes. The standby NameNode reads the edits from the JNs and constantly watches the edit log for changes, applying them to its own namespace. This ensures that the namespace state is fully synchronized when the cluster fails over.
For an HA cluster it is critical that only one NameNode is active at any time; otherwise the two NameNodes' states would diverge and data could be lost or corrupted. This is where ZooKeeper comes in: both NameNodes register in ZooKeeper, and when the active NameNode fails, ZooKeeper detects this and automatically switches the standby NameNode to active.
Hadoop HA consists of HDFS HA and YARN HA; both are set up below.

Environment:

OS: CentOS 7.0

Hadoop: 2.8.0

ZooKeeper: 3.4.10
Five virtual machines, with services deployed as follows:
master1
ip: 192.168.70.101
Installed software: Hadoop (HA)
Running processes: NameNode, ResourceManager, DFSZKFailoverController

master2
ip: 192.168.70.102
Installed software: Hadoop (HA)
Running processes: NameNode, ResourceManager, DFSZKFailoverController

node1
ip: 192.168.70.103
Installed software: Hadoop, ZooKeeper
Running processes: DataNode, NodeManager, QuorumPeerMain, JournalNode

node2
ip: 192.168.70.104
Installed software: Hadoop, ZooKeeper
Running processes: DataNode, NodeManager, QuorumPeerMain, JournalNode

node3
ip: 192.168.70.105
Installed software: Hadoop, ZooKeeper
Running processes: DataNode, NodeManager, QuorumPeerMain, JournalNode
(1) Preparation <*** this is where most problems occur; if you hit an issue, search for it or leave a comment ***>
Create two new virtual machines and set their hostnames and static IPs as follows, in the same way as for the hosts used in the ZooKeeper setup above:
192.168.70.101 master1
192.168.70.102 master2

On master1, edit the /etc/hosts file so that it contains the following (do the same on master2, node1, node2 and node3 so every host can resolve all five names):
192.168.70.101 master1
192.168.70.102 master2
192.168.70.103 node1
192.168.70.104 node2
192.168.70.105 node3

Passwordless SSH login
On master1, in root's home directory, run: ssh-keygen -t rsa
and press Enter at every prompt.
Then run:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@master1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@master2
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node3
This gives master1 passwordless login to the other machines.
To enable passwordless login between any pair of master1, master2, node1, node2 and node3, repeat the steps above on each machine.
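As a quick check (a sketch; assumes the /etc/hosts entries above are in place), each of the following should print the remote hostname without asking for a password:

for h in master1 master2 node1 node2 node3; do ssh root@$h hostname; done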
(2) Install the JDK under /opt on master1, master2, node1, node2 and node3, and set the environment variables
cd /opt/
wget http://download.oracle.com/otn-pub/java/jdk/8u121-b13/e9e7ea248e2c4826b92b3f075a80e441/jdk-8u121-linux-x64.tar.gz?AuthParam=1491205869_4d911aca9d38a4b869d2a6ecaa9bbf47

tar zxvf jdk-8u121-linux-x64.tar.gz

vi ~/.bash_profile
export JAVA_HOME=/opt/jdk1.8.0_121
export PATH=$PATH:$JAVA_HOME/bin
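Then reload the profile and confirm the JDK is picked up (each machine should report version 1.8.0_121):

source ~/.bash_profile
java -version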
(3) Install the Hadoop cluster

Download and extract Hadoop

Run the following command in the /opt directory on master1, master2, node1, node2 and node3:
wget http://219.238.4.196/files/705200000559DFDC/apache.communilink.net/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz

Then extract Hadoop on each machine:

tar zxvf  hadoop-2.8.0.tar.gz
(4) On master1, edit the Hadoop configuration files. Seven files need to be changed: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, hadoop-env.sh, mapred-env.sh and yarn-env.sh. (In a fresh Hadoop 2.8.0 extract, mapred-site.xml may first need to be created by copying etc/hadoop/mapred-site.xml.template.)
core-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

 <property>

  <name>fs.defaultFS</name>

  <value>hdfs://bdcluster</value>

 </property>

 <!-- 指定hadoop臨時目錄 -->

 <property>

  <name>hadoop.tmp.dir</name>

  <value>/opt/hadoop-2.8.0/tmp</value>

 </property>

 <!-- 指定zookeeper地址 -->

 <property>

  <name>ha.zookeeper.quorum</name>

  <value>node1:2181,node2:2181,node3:2181</value>

 </property>

 <property>

  <name>ha.zookeeper.session-timeout.ms</name>

  <value>3000</value>

 </property>

</configuration>
hdfs-site.xml 

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

 <!--指定hdfs的nameservice為bdcluster,需要和core-site.xml中的保持一致 -->

 <property>

  <name>dfs.nameservices</name>

  <value>bdcluster</value>

 </property>

 <!-- bdcluster下面有兩個NameNode,分別是nn1,nn2 -->

 <property>

  <name>dfs.ha.namenodes.bdcluster</name>

  <value>nn1,nn2</value>

 </property>

 <!-- nn1的RPC通訊地址 -->

 <property>

  <name>dfs.namenode.rpc-address.bdcluster.nn1</name>

  <value>master1:9000</value>

 </property>

 <!-- nn2的RPC通訊地址 -->

 <property>

  <name>dfs.namenode.rpc-address.bdcluster.nn2</name>

  <value>master2:9000</value>

 </property>

 <!-- nn1的http通訊地址 -->

 <property>

  <name>dfs.namenode.http-address.bdcluster.nn1</name>

  <value>master1:50070</value>

 </property>

 <!-- nn2的http通訊地址 -->

 <property>

  <name>dfs.namenode.http-address.bdcluster.nn2</name>

  <value>master2:50070</value>

 </property>

 <!-- 指定NameNode的元資料在JournalNode上的存放位置 -->

 <property>

  <name>dfs.namenode.shared.edits.dir</name>

  <value>qjournal://node1:8485;node2:8485;node3:8485/bdcluster</value>

 </property>

 <!-- 指定JournalNode在本地磁碟存放資料的位置 -->

 <property>

  <name>dfs.journalnode.edits.dir</name>

  <value>/opt/hadoop-2.8.0/tmp/journal</value>

 </property>

 <property>

  <name>dfs.ha.automatic-failover.enabled</name>

  <value>true</value>

 </property>

 <!-- 配置失敗自動切換實現方式 -->

 <property>

  <name>dfs.client.failover.proxy.provider.bdcluster</name>

  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

  </value>

 </property>

 <!-- 配置隔離機制,多個機制用換行分割,即每個機制暫用一行 -->

 <property>

  <name>dfs.ha.fencing.methods</name>

  <value>

   sshfence

   shell(/bin/true)

  </value>

 </property>

 <!-- 使用sshfence隔離機制時需要ssh免密碼登陸 -->

 <property>

  <name>dfs.ha.fencing.ssh.private-key-files</name>

  <value>/root/.ssh/id_rsa</value>

 </property>

 <!-- 配置sshfence隔離機制超時時間 -->

 <property>

  <name>dfs.ha.fencing.ssh.connect-timeout</name>

  <value>30000</value>

 </property>

 <!--指定namenode名稱空間的儲存地址 -->

 <property>

  <name>dfs.namenode.name.dir</name>

  <value>file:///opt/hadoop-2.8.0/hdfs/name</value>

 </property>

 <!--指定datanode資料儲存地址 -->

 <property>

  <name>dfs.datanode.data.dir</name>

  <value>file:///opt/hadoop-2.8.0/hdfs/data</value>

 </property>

 <!--指定資料冗餘份數 -->

 <property>

  <name>dfs.replication</name>

  <value>3</value>

 </property>

</configuration>
mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

 <property>

  <name>mapreduce.framework.name</name>

  <value>yarn</value>

 </property>

 <!-- 配置 MapReduce JobHistory Server 地址 ,預設埠10020 -->

 <property>

  <name>mapreduce.jobhistory.address</name>

  <value>0.0.0.0:10020</value>

 </property>

 <!-- 配置 MapReduce JobHistory Server web ui 地址, 預設埠19888 -->

 <property>

  <name>mapreduce.jobhistory.webapp.address</name>

  <value>0.0.0.0:19888</value>

 </property>

</configuration>
yarn-site.xml

<?xml version="1.0"?>

<configuration>

 <!--開啟resourcemanagerHA,預設為false -->

 <property>

  <name>yarn.resourcemanager.ha.enabled</name>

  <value>true</value>

 </property>

 <!--開啟自動恢復功能 -->

 <property>

  <name>yarn.resourcemanager.recovery.enabled</name>

  <value>true</value>

 </property>

 <!-- 指定RM的cluster id -->

 <property>

  <name>yarn.resourcemanager.cluster-id</name>

  <value>yrc</value>

 </property>

 <!--配置resourcemanager -->

 <property>

  <name>yarn.resourcemanager.ha.rm-ids</name>

  <value>rm1,rm2</value>

 </property>

 <!-- 分別指定RM的地址 -->

 <property>

  <name>yarn.resourcemanager.hostname.rm1</name>

  <value>master1</value>

 </property>

 <property>

  <name>yarn.resourcemanager.hostname.rm2</name>

  <value>master2</value>

 </property>

 <!-- <property> <name>yarn.resourcemanager.ha.id</name> <value>rm1</value> 

  <description>If we want to launch more than one RM in single node,we need 

  this configuration</description> </property> -->

 <!-- 指定zk叢集地址 -->

 <property>

  <name>ha.zookeeper.quorum</name>

  <value>node1:2181,node2:2181,node3:2181</value>

 </property>

 !--配置與zookeeper的連線地址-->

 <property>

  <name>yarn.resourcemanager.zk-state-store.address</name>

  <value>node1:2181,node2:2181,node3:2181</value>

 </property>

 <property>

  <name>yarn.resourcemanager.store.class</name>

  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore

  </value>

 </property>

 <property>

  <name>yarn.resourcemanager.zk-address</name>

  <value>node1:2181,node2:2181,node3:2181</value>

 </property>

 <property>

  <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>

  <value>/yarn-leader-election</value>

  <description>Optionalsetting.Thedefaultvalueis/yarn-leader-election

  </description>

 </property>

 <property>

  <name>yarn.nodemanager.aux-services</name>

  <value>mapreduce_shuffle</value>

 </property>

</configuration>
hadoop-env.sh, mapred-env.sh and yarn-env.sh (add the following to all three)

export JAVA_HOME=/opt/jdk1.8.0_121
export CLASS_PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib 
export HADOOP_HOME=/opt/hadoop-2.8.0
export HADOOP_PID_DIR=/opt/hadoop-2.8.0/pids 
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native 
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_PREFIX=$HADOOP_HOME 
export HADOOP_MAPRED_HOME=$HADOOP_HOME 
export HADOOP_COMMON_HOME=$HADOOP_HOME 
export HADOOP_HDFS_HOME=$HADOOP_HOME 
export YARN_HOME=$HADOOP_HOME 
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop 
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop 
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop 
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native 
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
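One additional file is worth setting before the sync (the original steps do not list it explicitly, so treat the exact content as an assumption about the intended setup): sbin/hadoop-daemons.sh and start-yarn.sh start worker daemons on the hosts listed in etc/hadoop/slaves, so on master1 that file should contain the three data nodes:

cat > /opt/hadoop-2.8.0/etc/hadoop/slaves <<'EOF'
node1
node2
node3
EOF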
(5) Run the following commands to sync the configuration files edited on master1 to master2, node1, node2 and node3:

scp -r /opt/hadoop-2.8.0/etc/hadoop root@master2:/opt/hadoop-2.8.0/etc/
scp -r /opt/hadoop-2.8.0/etc/hadoop root@node1:/opt/hadoop-2.8.0/etc/
scp -r /opt/hadoop-2.8.0/etc/hadoop root@node2:/opt/hadoop-2.8.0/etc/
scp -r /opt/hadoop-2.8.0/etc/hadoop root@node3:/opt/hadoop-2.8.0/etc/
At this point, all of the Hadoop configuration files are in place.
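As a quick sanity check after the sync (a sketch), every host should now return the same nameservice for fs.defaultFS:

for h in master2 node1 node2 node3; do echo "== $h =="; ssh root@$h /opt/hadoop-2.8.0/bin/hdfs getconf -confKey fs.defaultFS; done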
(6) Start the ZooKeeper cluster
Run the following command on node1, node2 and node3 to start the ZooKeeper cluster:
[root@node1 bin]$ sh zkServer.sh start
To verify that the ZooKeeper cluster is running, run the following command on node1, node2 and node3; if the cluster started successfully there will be two follower nodes and one leader node:
[root@node1 bin]$ sh zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
(7) Start the JournalNode cluster
Run the following command on master1 to start the JournalNode cluster:
[root@master1 hadoop-2.8.0]$ sbin/hadoop-daemons.sh start journalnode
Running jps on node1, node2 and node3 should then show a JournalNode Java process (and its pid).
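For example (a sketch run from master1, relying on the passwordless SSH configured earlier and the JDK path used above):

for h in node1 node2 node3; do echo "== $h =="; ssh root@$h /opt/jdk1.8.0_121/bin/jps | grep JournalNode; done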
(8) Format ZKFC so that the HA znode is created in ZooKeeper
Run the following command on master1 to do the formatting:
hdfs zkfc -formatZK
After the format succeeds, you can see the following in ZooKeeper:
[zk: localhost:2181(CONNECTED) 1] ls /hadoop-ha
[bdcluster]
(9) Format HDFS
Run the following command on master1:
hadoop namenode -format
(10) Start the NameNodes
First start the active NameNode on master1 by running:
[root@master1 hadoop-2.8.0]$ sbin/hadoop-daemon.sh start namenode
Then sync the NameNode metadata on master2 and start the standby NameNode with the following commands:
# Sync the NameNode metadata to master2
[root@master2 hadoop-2.8.0]$ bin/hdfs namenode -bootstrapStandby
# Start the NameNode on master2 as standby
[root@master2 hadoop-2.8.0]$ sbin/hadoop-daemon.sh start namenode
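The state of each NameNode can be checked at any point with hdfs haadmin (a quick sketch; with automatic failover enabled, both may report standby until the ZKFCs are started in step (13)):

bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2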
(11) Start the DataNodes
Run the following command on master1:
[root@master1 hadoop-2.8.0]$ sbin/hadoop-daemons.sh start datanode
(12) Start YARN
Start it on the machine acting as the resource manager, here master1, by running the following command:
[root@master1 hadoop-2.8.0]$ sbin/start-yarn.sh
(13) Start the ZKFCs
Run the following command on master1 and on master2, so that a DFSZKFailoverController runs alongside each NameNode:
[root@master1 hadoop-2.8.0]$ sbin/hadoop-daemon.sh start zkfc
[root@master2 hadoop-2.8.0]$ sbin/hadoop-daemon.sh start zkfc
(14) Once everything is started, running jps on master1, master2, node1, node2 and node3 should show the following processes:
master1 (192.168.70.101): NameNode, ResourceManager, DFSZKFailoverController
master2 (192.168.70.102): NameNode, ResourceManager, DFSZKFailoverController
node1 (192.168.70.103): DataNode, NodeManager, QuorumPeerMain, JournalNode
node2 (192.168.70.104): DataNode, NodeManager, QuorumPeerMain, JournalNode
node3 (192.168.70.105): DataNode, NodeManager, QuorumPeerMain, JournalNode
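NameNode failover can be verified the same way as the ResourceManager test described in step (15) below (a sketch; the pid is whatever jps reports for NameNode on the currently active node):

# On the active NameNode host (assuming master1/nn1 is currently active)
/opt/jdk1.8.0_121/bin/jps | grep NameNode   # note the pid
kill -9 <NameNode-pid>
# Within a few seconds nn2 on master2 should report active
/opt/hadoop-2.8.0/bin/hdfs haadmin -getServiceState nn2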
(15) ResourceManager HA

After the NameNode HA setup above, only one ResourceManager (here, the one on master1) is running; the ResourceManager on the other node (master2) has to be started manually. On master2, run:
sbin/yarn-daemon.sh start resourcemanager
Then check the ResourceManager state with the following commands:
yarn rmadmin -getServiceState rm1
The result shows that rm1 is active,

yarn rmadmin -getServiceState rm2
while rm2 is standby.
Verifying HA works the same way as for NameNode HA: kill the active ResourceManager and the standby ResourceManager will switch to active.
There is also a command to force a transition (with automatic failover enabled it may refuse to run unless --forcemanual is added):
yarn rmadmin -transitionToStandby rm1
Note: on master2 you must change rm1 to rm2 in yarn-site.xml

<property>
    <name>yarn.resourcemanager.ha.id</name>
    <value>rm2</value>
    <description>If we want to launch more than one RM in single node,we need this configuration</description>
 </property>

On master1 the configured id is rm1, while on master2 it must be changed to rm2; if it is not changed, the ResourceManager on master2 will not start.