
Hadoop: fixing Live Nodes showing 0 when setting up a 3-node cluster

First, while setting up the 3-node Hadoop cluster, I followed the basic installation steps and configured the following files:

  1. core-site.xml
  2. hadoop-env.sh
  3. hdfs-site.xml
  4. yarn-env.sh
  5. yarn-site.xml
  6. slaves
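For reference, the slaves file simply lists one worker hostname per line. A minimal sketch matching the three nodes in this post (the scratch path `/tmp/slaves.example` is hypothetical; on a real Hadoop 2.x install the file lives under `$HADOOP_HOME/etc/hadoop/slaves`, and `fs.defaultFS` in core-site.xml would typically point at the NameNode, e.g. `hdfs://spark1:9000`):

```shell
# Hypothetical sketch: the slaves file lists one DataNode hostname per line.
# Written to a scratch path here; copy to $HADOOP_HOME/etc/hadoop/slaves on a real node.
cat > /tmp/slaves.example <<'EOF'
spark1
spark2
spark3
EOF
cat /tmp/slaves.example
```

These hostnames must resolve identically on every node, which is exactly where the problem below comes from.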

After that, format the NameNode:

[root@spark1 hadoop]# hdfs namenode -format

Then start the HDFS cluster:

[root@spark1 hadoop]# start-dfs.sh

Check whether the daemons on each node are running.
spark1:

[root@spark1 hadoop]# jps
5575 SecondaryNameNode
5722 Jps
5443 DataNode
5336 NameNode

spark2:

[root@spark2 hadoop]# jps
1859 Jps
1795 DataNode

spark3:

[root@spark3 ~]# jps
1748 DataNode
1812 Jps

The core configuration files for my cluster were fine, yet when I checked the web UI on port 50070, Live Nodes showed only 1, and that one node was spark1!

After troubleshooting, I found the root cause: when /etc/hosts was configured earlier, each node had a different version:

spark1:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.111  spark1

spark2:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.112  spark2

spark3:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.113  spark3

Now, change /etc/hosts to the same content on all three nodes:


127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.113  spark3
192.168.30.111  spark1
192.168.30.112  spark2
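With identical /etc/hosts files in place, you can sanity-check the mappings before restarting HDFS. A minimal sketch, using a copy at a hypothetical scratch path rather than touching the real /etc/hosts:

```shell
# Write the unified hosts content to a scratch file (hypothetical path);
# on the real nodes this content lives in /etc/hosts.
cat > /tmp/hosts.example <<'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.111  spark1
192.168.30.112  spark2
192.168.30.113  spark3
EOF

# Print the IP each cluster hostname maps to; every node should report
# the same three mappings, otherwise DataNodes cannot find the NameNode.
for host in spark1 spark2 spark3; do
  ip=$(awk -v h="$host" '$2 == h {print $1}' /tmp/hosts.example)
  echo "$host -> $ip"
done
```

On a live node, `getent hosts spark2` performs the same lookup against the real resolver.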

OK, the problem is solved.
Verify with the following command:

[root@spark1 hadoop]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 55609774080 (51.79 GB)
Present Capacity: 47725793280 (44.45 GB)
DFS Remaining: 47725719552 (44.45 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Live datanodes:
Name: 192.168.30.111:50010 (spark1)
Hostname: spark1
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2628579328 (2.45 GB)
DFS Remaining: 15907987456 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.82%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:06 CST 2017


Name: 192.168.30.113:50010 (spark3)
Hostname: spark3
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2627059712 (2.45 GB)
DFS Remaining: 15909507072 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.83%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:05 CST 2017


Name: 192.168.30.112:50010 (spark2)
Hostname: spark2
Decommission Status : Normal
Configured Capacity: 18536591360 (17.26 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 2628341760 (2.45 GB)
DFS Remaining: 15908225024 (14.82 GB)
DFS Used%: 0.00%
DFS Remaining%: 85.82%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Wed Aug 09 05:03:05 CST 2017

As the report shows, three DataNodes are now connected:
192.168.30.111
192.168.30.112
192.168.30.113
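For a quick scripted check instead of reading the full report, the live entries can be counted from saved output, since the report prints one `Hostname:` line per live DataNode. A sketch with a few sample lines embedded (scratch path is hypothetical):

```shell
# Sample excerpt of `hdfs dfsadmin -report` output saved to a scratch file;
# on a real cluster: hdfs dfsadmin -report > /tmp/report.txt
cat > /tmp/report.txt <<'EOF'
Datanodes available: 3 (3 total, 0 dead)
Hostname: spark1
Hostname: spark3
Hostname: spark2
EOF

# Each live DataNode contributes exactly one "Hostname:" line.
live=$(grep -c '^Hostname:' /tmp/report.txt)
echo "live datanodes: $live"
```

If this count ever drops below the number of slaves, re-check /etc/hosts on the missing node first.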