
Hadoop: Scaling from a Single-Node Pseudo-Distributed Setup to a Multi-Node Distributed Cluster

1. Preparing the Virtual Machines

Clone the first virtual machine so that there are three in total, then configure their networks and hostnames as follows:

192.168.1.130 hadoop-server-00
192.168.1.131 hadoop-server-01
192.168.1.132 hadoop-server-02

Make sure the three machines can ping one another. If you are not comfortable configuring the network from the command line, boot into the graphical environment and set it up there.
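The reachability check can be scripted rather than done by hand; the IP addresses below are the ones assigned above:

```shell
# Ping each cluster node once; a 2-second timeout keeps failures fast.
for host in 192.168.1.130 192.168.1.131 192.168.1.132; do
  if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
    echo "$host ok"
  else
    echo "$host unreachable"
  fi
done
```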

Next, configure passwordless SSH login from node 00 to nodes 01 and 02, as described in the previous section.
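A minimal sketch of that passwordless-login setup, run on hadoop-server-00 (this assumes ssh-copy-id is available on your distribution; the hostnames are the ones configured above):

```shell
# Generate a key pair once (empty passphrase), then push the public key
# to every node, including 00 itself, so start-dfs.sh can reach them all.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in hadoop-server-00 hadoop-server-01 hadoop-server-02; do
  ssh-copy-id "$host"
done
```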

2. Setting Up Hadoop on 01 and 02

If 01 and 02 were cloned from the first virtual machine, no reconfiguration is needed; otherwise, copy the Hadoop installation from 00 to the other two machines:

scp -r /usr/local/apps/hadoop-2.4.1/ hadoop-server-01:/usr/local/apps/
scp -r /usr/local/apps/hadoop-2.4.1/ hadoop-server-02:/usr/local/apps/

If the installation was copied over, delete the tmp directory under the Hadoop directory on 01 and 02, because the data in tmp was generated by 00 and identifies the node as 00. On 01 and 02, run: rm -rf tmp/. Since 01 and 02 are worker nodes, also make sure their slaves files are empty.
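The cleanup on the worker nodes can be done in one short script; the path is the install location used throughout this article:

```shell
# Run on hadoop-server-01 and hadoop-server-02.
HADOOP_HOME=/usr/local/apps/hadoop-2.4.1
rm -rf "$HADOOP_HOME/tmp"               # remove data stamped with 00's identity
: > "$HADOOP_HOME/etc/hadoop/slaves"    # worker nodes keep an empty slaves file
```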

Configure slaves on the master node 00 (include 00 itself, otherwise the master will run no DataNode after startup): vim /usr/local/apps/hadoop-2.4.1/etc/hadoop/slaves

hadoop-server-00
hadoop-server-01
hadoop-server-02

With this in place, starting the master node will automatically start the 01 and 02 worker nodes as well. For convenience, add the Hadoop paths to the environment: vim /etc/profile

export HADOOP_HOME=/usr/local/apps/hadoop-2.4.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply the configuration: source /etc/profile
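A quick sanity check (not in the original write-up) that the profile change took effect: the sbin directory should now be on PATH and the hadoop command should resolve.

```shell
# After `source /etc/profile`, the sbin directory should be on PATH.
case ":$PATH:" in
  *":/usr/local/apps/hadoop-2.4.1/sbin:"*) echo "sbin on PATH" ;;
  *) echo "sbin missing from PATH" ;;
esac
hadoop version | head -n 1   # first line reports the Hadoop version
```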

Start HDFS: start-dfs.sh

3. Handling Warnings

Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now

Resolved by following: https://blog.csdn.net/zxae86/article/details/45922821

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The same post (https://blog.csdn.net/zxae86/article/details/45922821) did not fix this one, because the configuration path it gives (export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/") is missing the trailing native. Another article finally solved it:

Resolved by following: http://blog.sina.com.cn/s/blog_3d9e90ad0102wqrp.html

The solution requires downloading hadoop-native-64-2.4.1.tar from http://dl.bintray.com/sequenceiq/sequenceiq-bin/hadoop-native-64-2.4.1.tar (mirror: https://pan.baidu.com/s/1dxEkGod73wug1yqYpx2MGg, extraction code: dhqg).
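A sketch of how a native-library tarball like this is typically applied, using the paths from this article; the export line includes the trailing /native that the first guide omitted:

```shell
HADOOP_HOME=/usr/local/apps/hadoop-2.4.1
# Overwrite the stock native libraries with the prebuilt 64-bit ones.
tar -xf hadoop-native-64-2.4.1.tar -C "$HADOOP_HOME/lib/native"
# Point the JVM at the native directory -- note the trailing /native.
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
```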

4. Startup


[root@hadoop-server-00 hadoop]# start-dfs.sh
Starting namenodes on [hadoop-server-00]
hadoop-server-00: starting namenode, logging to /usr/local/apps/hadoop-2.4.1/logs/hadoop-root-namenode-hadoop-server-00.out
hadoop-server-00: starting datanode, logging to /usr/local/apps/hadoop-2.4.1/logs/hadoop-root-datanode-hadoop-server-00.out
hadoop-server-01: starting datanode, logging to /usr/local/apps/hadoop-2.4.1/logs/hadoop-root-datanode-hadoop-server-01.out
hadoop-server-02: starting datanode, logging to /usr/local/apps/hadoop-2.4.1/logs/hadoop-root-datanode-hadoop-server-02.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/apps/hadoop-2.4.1/logs/hadoop-root-secondarynamenode-hadoop-server-00.out
[root@hadoop-server-00 hadoop]# jps
7041 SecondaryNameNode
7138 Jps
6841 DataNode
6748 NameNode
[root@hadoop-server-00 hadoop]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/apps/hadoop-2.4.1/logs/yarn-root-resourcemanager-hadoop-server-00.out
hadoop-server-02: starting nodemanager, logging to /usr/local/apps/hadoop-2.4.1/logs/yarn-root-nodemanager-hadoop-server-02.out
hadoop-server-01: starting nodemanager, logging to /usr/local/apps/hadoop-2.4.1/logs/yarn-root-nodemanager-hadoop-server-01.out
hadoop-server-00: starting nodemanager, logging to /usr/local/apps/hadoop-2.4.1/logs/yarn-root-nodemanager-hadoop-server-00.out
[root@hadoop-server-00 hadoop]# jps
7041 SecondaryNameNode
7346 NodeManager
6841 DataNode
7610 Jps
6748 NameNode
7245 ResourceManager

As shown, all five daemons started successfully, and the earlier warnings are gone.
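As a further check (not in the original write-up), the NameNode can report the live DataNodes; with the slaves file above, three entries should appear:

```shell
# Each live DataNode appears as a "Name: <ip>:<port>" entry in the report.
hdfs dfsadmin -report | grep -c '^Name:'   # expect 3
```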