
Hadoop distributed installation and deployment video tutorial (net-disk link includes a pre-configured CentOS VM image and Hadoop config files)


Reference downloads:
http://pan.baidu.com/s/1ntwUij3
Video tutorial: hadoop安裝.flv
VirtualBox VM image: hadoop.part1-part5.rar
Hadoop release: hadoop-2.2.0.tar.gz
Hadoop config files: hadoop_conf.tar.gz
Hadoop course material: 煉數成金-hadoop


VirtualBox download and installation:
VirtualBox-4.3.12-93733-Win.exe
http://dlc.sun.com.edgesuite.net/virtualbox/4.3.12/VirtualBox-4.3.12-93733-Win.exe
(If it fails to launch on Win7, run it in compatibility mode.)
See also: http://www.cnblogs.com/xia520pi/archive/2012/05/16/2503949.html


0 Pre-installation checks


System user / password: root / root
Login user / password: hadoop / hadoop

Disable the firewall and unneeded services:
chkconfig iptables off
chkconfig ip6tables off
chkconfig postfix off
chkconfig bluetooth off

Check that sshd is enabled, the firewall is off, and so on:
chkconfig --list
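
To query a single service directly, the standard CentOS 6 commands are:
chkconfig --list sshd
service iptables status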

Boot into text mode (runlevel 5 is graphical, 3 is text; text mode speeds up system startup):
vi /etc/inittab
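For reference, the default runlevel in /etc/inittab is set by the initdefault line; changing the 5 to a 3 selects text mode:
id:3:initdefault: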

Shut down or reboot immediately:
shutdown -h now
shutdown -r now


1 Hadoop cluster plan
NameNode
Hadoop1 192.168.1.111
DataNode
Hadoop1 192.168.1.111
Hadoop2 192.168.1.112
Hadoop3 192.168.1.113
Software versions
Java 7u21
Hadoop 2.2.0


2 Template machine setup

Install the JDK (steps omitted) -- Java 7u21
/usr/java/jdk1.7.0_21
Install Hadoop (steps omitted) -- Hadoop 2.2.0
/app/hadoop/hadoop220

Hadoop1 192.168.1.111 08:00:27:64:15:BA System eth0
Hadoop2 192.168.1.112 08:00:27:CD:A6:29 System eth0
Hadoop3 192.168.1.113 08:00:27:AD:BF:A9 System eth0


vi /etc/hosts
192.168.1.111 hadoop1
192.168.1.112 hadoop2
192.168.1.113 hadoop3

Add the hadoop user:
groupadd -g 1000 hadoop
useradd -u 2000 -g hadoop hadoop
passwd hadoop
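
To confirm the account was created as intended, check its IDs:
id hadoop
The output should show uid=2000(hadoop) and gid=1000(hadoop).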

Edit the environment variables:
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_21
export JRE_HOME=/usr/java/jdk1.7.0_21/jre
export ANT_HOME=/app/ant192
export MAVEN_HOME=/app/maven305
export FINDBUGS_HOME=/app/findbugs202
export SCALA_HOME=/app/scala2104
export HADOOP_COMMON_HOME=/app/hadoop/hadoop220
export HADOOP_CONF_DIR=/app/hadoop/hadoop220/etc/hadoop
export YARN_CONF_DIR=/app/hadoop/hadoop220/etc/hadoop
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:${ANT_HOME}/bin:${MAVEN_HOME}/bin:${FINDBUGS_HOME}/bin:${SCALA_HOME}/bin:${HADOOP_COMMON_HOME}/bin:${HADOOP_COMMON_HOME}/sbin:$PATH

Reload the environment variables:
source /etc/profile
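
A quick sanity check that the variables took effect (assuming the JDK and Hadoop were already unpacked at the paths above):
java -version
hadoop version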

Change to the Hadoop configuration directory:
cd /app/hadoop/hadoop220/etc/hadoop/

Edit each of the following configuration files:
vi slaves
vi core-site.xml
vi hdfs-site.xml
vi yarn-env.sh
vi mapred-site.xml
vi hadoop-env.sh
vi yarn-site.xml

vi slaves
hadoop1
hadoop2
hadoop3

vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:8000</value>
</property>
</configuration>
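
Note that 8000 is a non-default choice for the NameNode RPC port (8020 or 9000 are more commonly seen); any free port works as long as every node uses the same fs.defaultFS URI.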

vi hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///app/hadoop/hadoop220/mydata/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///app/hadoop/hadoop220/mydata/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
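
The name and data directories above are used by the NameNode and DataNodes; it does no harm to create them up front as the hadoop user:
mkdir -p /app/hadoop/hadoop220/mydata/name
mkdir -p /app/hadoop/hadoop220/mydata/data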

vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
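
Note: a stock Hadoop 2.2.0 tarball ships only mapred-site.xml.template, so if mapred-site.xml is missing, copy the template first:
cp mapred-site.xml.template mapred-site.xml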

vi yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

vi hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.7.0_21



3 Cluster setup

In VirtualBox, clone hadoop1 twice to create hadoop2 and hadoop3.
On each clone, update the NIC MAC address, network settings, and hostname:
Hadoop1 192.168.1.111 08:00:27:64:15:BA System eth0
Hadoop2 192.168.1.112 08:00:27:CD:A6:29 System eth0
Hadoop3 192.168.1.113 08:00:27:AD:BF:A9 System eth0


Update each clone's NIC MAC address and network settings. VirtualBox assigns the clones new MACs, so delete the stale eth0 entry from the udev rules (or update its MAC) so the new NIC keeps the name eth0:
vi /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:64:15:ba", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
UUID=dc326328-8fb1-4e22-b8d1-f90a890e5f56
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
IPADDR=192.168.1.111
PREFIX=24
GATEWAY=192.168.1.1
DNS1=8.8.8.8
HWADDR=08:00:27:64:15:BA
LAST_CONNECT=1408499318
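
For the clones, only the address fields change. Using the plan from section 1, hadoop2 would get:
IPADDR=192.168.1.112
HWADDR=08:00:27:CD:A6:29
and hadoop3 would get 192.168.1.113 and 08:00:27:AD:BF:A9 accordingly.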

vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop1

Configure /etc/hosts name resolution on hadoop1, hadoop2, and hadoop3:
vi /etc/hosts
192.168.1.111 hadoop1
192.168.1.112 hadoop2
192.168.1.113 hadoop3
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
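
After saving, verify that every hostname resolves and responds:
ping -c 1 hadoop1
ping -c 1 hadoop2
ping -c 1 hadoop3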


Configure passwordless SSH login among the three servers

Switch to the hadoop user's home directory to generate its key pair:
su - hadoop
cd ~

Generate a passwordless key pair; when prompted for a save path, just press Enter to accept the default.

The generated pair, id_rsa and id_rsa.pub, is stored under /home/hadoop/.ssh by default:
ssh-keygen -t rsa

On the hadoop1 node, append id_rsa.pub to the authorized keys:


cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Restrict the permissions on "authorized_keys":
chmod 600 ~/.ssh/authorized_keys

Repeat the steps above on hadoop2 and hadoop3, then append all three machines' id_rsa.pub files to authorized_keys so that ~/.ssh/authorized_keys is identical on all three machines (a sketch follows; the video also walks through the details).
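
One way to aggregate the keys, run from hadoop1 as the hadoop user (a sketch; it assumes the hadoop password still works, since passwordless login is not yet in place):
ssh hadoop2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh hadoop3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoop2:~/.ssh/
scp ~/.ssh/authorized_keys hadoop3:~/.ssh/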

As root, enable the following lines in the SSH configuration file "/etc/ssh/sshd_config":


RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys # path to the public key file (the one generated above)

service sshd restart
Log out of root and verify, as the ordinary hadoop user, that the login succeeds (the first connection to each host prompts you to accept its host key):


ssh hadoop1

cd /app/hadoop/hadoop220

Format the HDFS filesystem:
bin/hdfs namenode -format
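Run the format exactly once, on hadoop1 only, as the hadoop user; reformatting later produces the namespaceID mismatch covered in section 4.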

Start/stop the HDFS filesystem:
sbin/start-dfs.sh
sbin/stop-dfs.sh
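
After start-dfs.sh, running jps as the hadoop user should show NameNode, SecondaryNameNode, and DataNode processes on hadoop1, and a DataNode on hadoop2 and hadoop3:
jps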

Test the HDFS filesystem:
bin/hdfs dfs -ls /
bin/hdfs dfs -mkdir -p /dataguru/test
bin/hdfs dfs -ls /dataguru/test
bin/hdfs dfs -put LICENSE.txt /dataguru/test/
bin/hdfs dfs -ls /dataguru/test

Start/stop the whole Hadoop system (HDFS and YARN; in 2.x start-all.sh is deprecated in favor of running start-dfs.sh and start-yarn.sh separately, but it still works):
sbin/start-all.sh
sbin/stop-all.sh
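
After start-all.sh, jps should additionally show a ResourceManager on hadoop1 and a NodeManager on every node. The web UIs are another quick check: the NameNode at http://hadoop1:50070 and the ResourceManager at http://hadoop1:8088 (the port set by yarn.resourcemanager.webapp.address above).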

All related scripts (the sbin/ directory listing):
distribute-exclude.sh start-all.cmd stop-all.sh
hadoop-daemon.sh start-all.sh stop-balancer.sh
hadoop-daemons.sh start-balancer.sh stop-dfs.cmd
hdfs-config.cmd start-dfs.cmd stop-dfs.sh
hdfs-config.sh start-dfs.sh stop-secure-dns.sh
httpfs.sh start-secure-dns.sh stop-yarn.cmd
mr-jobhistory-daemon.sh start-yarn.cmd stop-yarn.sh
refresh-namenodes.sh start-yarn.sh yarn-daemon.sh
slaves.sh stop-all.cmd yarn-daemons.sh


4 Troubleshooting notes
Error: could only be replicated to 0 nodes, instead of 1
Fix for DataNodes that fail to start:
Cause: formatting HDFS multiple times generates incompatible namespaceIDs.
1 On every node, delete the logs directory and the mydata data directory:
rm -rf logs/*
rm -rf mydata/*
2 Reformat HDFS:
bin/hdfs namenode -format
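
Since the cleanup in step 1 has to run on every node, a small loop saves repetition (a sketch; it assumes the passwordless SSH from section 2 is working):
for h in hadoop1 hadoop2 hadoop3; do
  ssh $h 'cd /app/hadoop/hadoop220 && rm -rf logs/* mydata/*'
done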
