Apache Hadoop 2.6.0-CDH5.4.1 Installation
1. Install Oracle Java 8
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
sudo vi /etc/profile
#set java environment
export JAVA_HOME=/usr/lib/jvm/java-8-oracle   # directory created by the oracle-java8-installer package
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
source /etc/profile
Verify that the Java environment is configured correctly:
java -version
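If the environment is set up correctly, the output looks roughly like this (exact build numbers will vary):
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)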
2. Machine Role Assignment
Hostname | IP | Role |
---|---|---|
Master | 10.5.0.196 | NameNode, ResourceManager, SecondaryNameNode |
Slave1 | 10.5.0.231 | DataNode, NodeManager |
Slave2 | 10.5.0.232 | DataNode, NodeManager |
Slave3 | 10.5.0.233 | DataNode, NodeManager |
Configure the Master
Set the content of /etc/hostname to Master (on Slave1 set it to Slave1, and so on).
Add the following lines to /etc/hosts:
10.5.0.196 Master
10.5.0.231 Slave1
10.5.0.232 Slave2
10.5.0.233 Slave3
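A quick sanity check that name resolution works after editing /etc/hosts (run from the Master; repeat for the other hostnames):
ping -c 1 Slave1   # should resolve to 10.5.0.231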
3. Create the hadoop User
sudo useradd -m -s /bin/bash hadoop   # -m creates /home/hadoop, which is used below
sudo passwd hadoop
Grant administrator privileges:
sudo visudo
# User privilege specification
root    ALL=(ALL:ALL) ALL
hadoop  ALL=(ALL:ALL) ALL
4. Configure Passwordless SSH Login on the Master
sudo apt-get install openssh-server
sudo su hadoop
cd /home/hadoop
ssh-keygen -t rsa   # press Enter at every prompt to generate the key pair
cd .ssh
cp id_rsa.pub authorized_keys   # or: cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Copy the public key to each Slave:
ssh-copy-id -i $HOME/.ssh/id_rsa.pub [email protected]   # or: scp authorized_keys [email protected]:/home/hadoop/.ssh/
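sshd is strict about permissions on these files; if key-based login still prompts for a password, tighten them and retest (run as the hadoop user on each machine):
chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys
ssh [email protected]   # should log in without a password prompt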
5. Download and Install hadoop-cdh
wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.4.1.tar.gz
sudo mkdir -p /usr/local/cloudera
sudo tar -zxvf hadoop-2.6.0-cdh5.4.1.tar.gz -C /usr/local/cloudera   # the tarball unpacks into its own hadoop-2.6.0-cdh5.4.1 directory
sudo chown hadoop:hadoop -R /usr/local/cloudera
sudo mkdir -p /usr/local/cloudera/hadoop_tmp/hdfs/namenode
sudo mkdir -p /usr/local/cloudera/hadoop_tmp/hdfs/datanode
sudo chown hadoop:hadoop -R /usr/local/cloudera/hadoop_tmp
Configure ~/.bashrc
vi $HOME/.bashrc
Append the following to the end of the file:
# HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/usr/local/cloudera/hadoop-2.6.0-cdh5.4.1
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
# HADOOP VARIABLES END
source ~/.bashrc
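To confirm the variables took effect, the hadoop command should now be on the PATH:
hadoop version   # should report 2.6.0-cdh5.4.1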
cd $HADOOP_HOME/etc/hadoop
sudo vi hadoop-env.sh
Change the value of JAVA_HOME:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
sudo vi slaves
Slave1
Slave2
Slave3
sudo vi masters
Master
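Note: the slaves file tells start-dfs.sh and start-yarn.sh where to launch the DataNode and NodeManager daemons; the masters file traditionally names the SecondaryNameNode host.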
6. Configure the XML Files
core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml
1. core-site.xml
<configuration>
<!-- file system properties -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://Master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
</configuration>
2. hdfs-site.xml
The hdfs/namenode and hdfs/datanode directories must be created by hand (done in step 5 above).
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/cloudera/hadoop_tmp/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/cloudera/hadoop_tmp/hdfs/datanode</value>
</property>
</configuration>
3. yarn-site.xml
<!-- Site specific YARN configuration properties -->
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>Master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>Master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>Master:8035</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>Master:8050</value>
</property>
</configuration>
4. mapred-site.xml
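If mapred-site.xml does not exist yet, it may need to be created from the bundled template first (assuming the CDH tarball follows the Apache layout here):
cp mapred-site.xml.template mapred-site.xml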
<configuration>
<property>
<!-- legacy JobTracker address; ignored when mapreduce.framework.name is yarn -->
<name>mapreduce.job.tracker</name>
<value>Master:5431</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
The basic configuration is now complete.
7. Sync the Hadoop Configuration to the Other Slave Nodes
sudo apt-get install rsync
sudo rsync -avxP /usr/local/cloudera/ [email protected]:/usr/local/cloudera/
sudo rsync -avxP /usr/local/cloudera/ [email protected]:/usr/local/cloudera/
sudo rsync -avxP /usr/local/cloudera/ [email protected]:/usr/local/cloudera/
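rsync copies only the Hadoop tree; Java and the ~/.bashrc variables from step 5 must also be present on every Slave. One way to copy the shell profile (assuming the same home-directory layout on the Slaves):
scp $HOME/.bashrc [email protected]:~/   # repeat for 10.5.0.232 and 10.5.0.233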
8. Start and Verify Hadoop
1. Format the distributed file system
hdfs namenode -format   # the older 'hadoop namenode -format' form is deprecated
Output ending with the following indicates success:
15/05/15 22:48:01 INFO util.ExitUtil: Exiting with status 0
15/05/15 22:48:01 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Master/10.5.0.196
************************************************************/
2. Start HDFS
./sbin/start-dfs.sh
3. Start YARN
./sbin/start-yarn.sh
4. Check Hadoop status via the web UI
Visit http://Master:50070 (the NameNode web UI).
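To verify the daemons came up, jps on each node should show the roles from the table in step 2:
jps   # Master: NameNode, SecondaryNameNode, ResourceManager; Slaves: DataNode, NodeManager
The YARN ResourceManager UI is served at http://Master:8088 by default.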
9. Problems Encountered During Installation
The native library files under hadoop-2.6.0-cdh5.4.1/lib/native are missing.
How this was discovered:
Running a hadoop command prints the following warning:
15/05/17 10:46:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Enable debug logging:
export HADOOP_ROOT_LOGGER=DEBUG,console
Running a hadoop command then prints:
15/05/17 16:46:48 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
15/05/17 16:46:48 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
15/05/17 16:46:48 DEBUG util.NativeCodeLoader: java.library.path=/usr/local/cloudera/hadoop-2.6.0-cdh5.4.1/lib/native
15/05/17 16:46:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/17 16:46:48 DEBUG util.PerformanceAdvisory: Falling back to shell based
cd /usr/local/cloudera/hadoop-2.6.0-cdh5.4.1/lib/native
The Hadoop native library files are indeed missing.
Two ways to fix this:
- Download the hadoop-2.6.0-cdh5.4.1-src.tar.gz source and build it locally (a Maven build; see the BUILDING file in the source tree; this takes a long time; my machine is 64-bit).
- Download the Apache 2.6.0 release (the binary tarball hadoop-2.6.0.tar.gz ships with prebuilt lib/native libraries; the source tarball does not) and copy the libraries over, as sketched below.
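A minimal sketch of the second approach, assuming the Apache tarball was extracted to ~/hadoop-2.6.0 (the path is illustrative):
cp ~/hadoop-2.6.0/lib/native/* /usr/local/cloudera/hadoop-2.6.0-cdh5.4.1/lib/native/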
10. Hadoop Cluster Test
Run the WordCount program from $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.4.1.jar; it counts word occurrences.
1. Gather some text files
mkdir file
cp *.txt ./file   # copy the .txt files into the local file directory
2. Create an input directory in HDFS
hadoop fs -mkdir /input
3. Upload the .txt files from the file directory to the HDFS input directory
hadoop fs -put ./file/*.txt /input/
4. Check that the files uploaded successfully
hadoop fs -ls /input/
5. Run the wordcount program from hadoop-mapreduce-examples-2.6.0-cdh5.4.1.jar
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.4.1.jar wordcount /input/ /output/
6. Check the results
hadoop fs -ls /output
-rw-r--r--   1 hadoop supergroup          0 2015-05-17 12:27 /output/_SUCCESS
-rw-r--r--   1 hadoop supergroup       9190 2015-05-17 12:27 /output/part-r-00000
hadoop fs -cat /output/part-r-00000
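Each output line is a word and its count, tab-separated; for example (actual contents depend on the input files):
hello   4
world   2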