
Redeploying HDFS as the hadoop User


Preface:
In a previous article (https://www.jianshu.com/p/eeae2f37a48c) we deployed HDFS as the root user. In production, a given component is usually started by its own dedicated user, so this article shows how to redeploy pseudo-distributed HDFS as the hadoop user.

1. Preparation

Create the hadoop user and configure passwordless SSH login.
Reference: https://www.jianshu.com/p/589bb43e0282
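The reference above covers the details; as a rough sketch, the key-based login setup looks like the following. A scratch directory stands in for /home/hadoop/.ssh so the steps can be tried without root — on the real host, run them as the hadoop user against ~/.ssh.

```shell
#!/bin/sh
set -e
# Scratch .ssh dir stands in for /home/hadoop/.ssh (assumes OpenSSH tools installed)
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
# Generate a key pair with an empty passphrase, for non-interactive logins
ssh-keygen -t rsa -N '' -q -f "$SSH_DIR/id_rsa"
# Authorize the user's own public key so `ssh localhost` needs no password
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
echo "authorized_keys ready: $SSH_DIR/authorized_keys"
```

The 700/600 permissions matter: sshd refuses key authentication if .ssh or authorized_keys is group- or world-writable.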

2. Stop the HDFS processes started by root and delete the storage files under /tmp
[root@hadoop000 hadoop-2.8.1]# pwd
/opt/software/hadoop-2.8.1
[root@hadoop000 hadoop-2.8.1]# jps
32244 NameNode
32350 DataNode
32558 SecondaryNameNode
1791 Jps
[root@hadoop000 hadoop-2.8.1]# sbin/stop-dfs.sh 
Stopping namenodes on [hadoop000]
hadoop000: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@hadoop000 hadoop-2.8.1]# jps
2288 Jps
[root@hadoop000 hadoop-2.8.1]# rm -rf /tmp/hadoop-* /tmp/hsperfdata_*
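The rm above wipes HDFS's default storage directories, so it is worth confirming the daemons are really down first. A small guard script (a sketch — the DRY_RUN switch and the process pattern are assumptions, not part of Hadoop) could look like:

```shell
#!/bin/sh
# Refuse to wipe the /tmp storage while an HDFS daemon is still alive,
# since a running NameNode/DataNode keeps writing there.
if pgrep -f 'hdfs.server.namenode.NameNode|hdfs.server.datanode.DataNode' >/dev/null 2>&1; then
    echo 'HDFS daemons still running -- run sbin/stop-dfs.sh first' >&2
    exit 1
fi
CMD='rm -rf /tmp/hadoop-* /tmp/hsperfdata_*'
# DRY_RUN=1 (the default here) only prints the command; set DRY_RUN=0 to really delete
DRY_RUN="${DRY_RUN:-1}"
if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $CMD"
else
    sh -c "$CMD"
fi
```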
3. Change the file owner
[root@hadoop000 software]# pwd
/opt/software
[root@hadoop000 software]# chown -R hadoop:hadoop hadoop-2.8.1
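A quick way to confirm the recursive chown took effect is to look for any entry not owned by the target user. The sketch below runs against a scratch tree and the current user so it can be tried anywhere; on the real host, set DIR=/opt/software/hadoop-2.8.1 and OWNER=hadoop instead.

```shell
#!/bin/sh
set -e
# Scratch tree and current user stand in for hadoop-2.8.1 and hadoop (assumptions)
DIR="${DIR:-$(mktemp -d)}"
OWNER="${OWNER:-$(id -un)}"
mkdir -p "$DIR/etc/hadoop" && touch "$DIR/etc/hadoop/core-site.xml"
# Count files/dirs NOT owned by $OWNER; 0 means the chown covered everything
BAD_COUNT=$(find "$DIR" ! -user "$OWNER" | wc -l)
echo "entries not owned by $OWNER: $BAD_COUNT"
```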
4. Switch to the hadoop user and edit the configuration files
# Step 1:
[hadoop@hadoop000 hadoop]$ pwd
/opt/software/hadoop-2.8.1/etc/hadoop
[hadoop@hadoop000 hadoop]$ vi hdfs-site.xml 
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.6.217:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>192.168.6.217:50091</value>
    </property>
</configuration>
# Step 2:
[hadoop@hadoop000 hadoop]$ vi core-site.xml 
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.6.217:9000</value>
    </property>
</configuration>
# Step 3:
[hadoop@hadoop000 hadoop]$ vi slaves
192.168.6.217
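Hand-edited XML is a common cause of startup failures, so a quick well-formedness check over these files is worthwhile before starting HDFS. This sketch validates a scratch copy of core-site.xml so it can be tried anywhere (assumes python3 is available); on the real host, point CONF at each file under etc/hadoop.

```shell
#!/bin/sh
set -e
# Scratch copy of core-site.xml stands in for the real file (assumption)
CONF="${CONF:-$(mktemp)}"
cat > "$CONF" <<'EOF'
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.6.217:9000</value>
</property>
</configuration>
EOF
# xml.etree raises an error (non-zero exit) on any stray or unclosed tag
RESULT=$(python3 -c "import sys, xml.etree.ElementTree as ET
ET.parse(sys.argv[1])
print('well-formed')" "$CONF")
echo "$CONF: $RESULT"
```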
5. Format and start HDFS
[hadoop@hadoop000 hadoop-2.8.1]$ pwd
/opt/software/hadoop-2.8.1
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs namenode -format
[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out
192.168.6.217: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out
Starting secondary namenodes [hadoop000]
hadoop000: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out
[hadoop@hadoop000 hadoop-2.8.1]$ jps
3141 Jps
2806 DataNode
2665 NameNode
2990 SecondaryNameNode
# All three HDFS processes now start on hadoop000, running as the hadoop user.
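To double-check which account the daemons run under, `ps` output can be filtered for the HDFS main classes. The sketch below parses a canned SAMPLE (an assumption standing in for real `ps -eo user,args` output) so it can be tried anywhere; on the live host, pipe `ps -eo user,args` in directly instead.

```shell
#!/bin/sh
# Canned `ps -eo user,args` lines for the three HDFS daemons (sample data)
SAMPLE='hadoop /usr/bin/java ... org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop /usr/bin/java ... org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop /usr/bin/java ... org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode'
# First column is the owning user; sort -u collapses it to the distinct set,
# which should contain only "hadoop" after the redeployment
OWNERS=$(printf '%s\n' "$SAMPLE" | awk '{print $1}' | sort -u)
echo "daemon owners: $OWNERS"
```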
