
Deploying a Hadoop 2.8.1 Cluster (HA) on CentOS 6.5

Prerequisites
1. JDK 1.7 installed on all three hosts
2. Firewall and SELinux disabled
3. Static IPs configured

I. Software Used
hadoop-2.8.1
zookeeper-3.4.6

II. Machine Plan (record the IP-to-hostname mapping in /etc/hosts)

IP             HOST       Installed Software   Daemons
192.168.95.10  hadoop000  Hadoop, ZooKeeper    NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, JobHistoryServer, NodeManager, QuorumPeerMain
192.168.95.20  hadoop001  Hadoop, ZooKeeper    NameNode, DFSZKFailoverController, JournalNode, DataNode, ResourceManager, NodeManager, QuorumPeerMain
192.168.95.30  hadoop002  Hadoop, ZooKeeper    JournalNode, DataNode, NodeManager, QuorumPeerMain

III. Create Directories and Upload Hadoop and ZooKeeper
Create a soft directory under /opt/ (on all three hosts):

mkdir /opt/soft

Upload hadoop-2.8.1 and zookeeper-3.4.6 to /opt/soft on hadoop000, then extract them:

 tar -zxvf zookeeper-3.4.6.tar.gz
 tar -zxvf hadoop-2.8.1.tar.gz

Rename the extracted directories:

 mv hadoop-2.8.1 hadoop
 mv zookeeper-3.4.6 zookeeper

Verify:

[root@hadoop000 soft]# ls
hadoop  hadoop-2.8.1.tar.gz  zookeeper  zookeeper-3.4.6.tar.gz

IV. Configure Passwordless SSH Trust Among the Three Hosts
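
The original post gives no commands for this step; a minimal sketch using ssh-keygen/ssh-copy-id (run as root, with the hostnames from the machine plan) might look like this:

# On each of the three hosts, generate a key pair (press Enter for defaults):
ssh-keygen -t rsa

# Still on each host, push the public key to all three hosts (including itself):
ssh-copy-id root@hadoop000
ssh-copy-id root@hadoop001
ssh-copy-id root@hadoop002

# Spot-check from hadoop000; no password prompt should appear:
ssh hadoop001 date
ssh hadoop002 date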

V. Deploy ZooKeeper
1. Set the ZooKeeper environment variables:

vim /etc/profile
export ZOOKEEPER_HOME=/opt/soft/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH

Apply the changes:
source /etc/profile

Verify:
[root@hadoop000 zookeeper]# echo $ZOOKEEPER_HOME
/opt/soft/zookeeper

2. Enter the /opt/soft/zookeeper/conf directory
Copy the default zoo_sample.cfg to a file named zoo.cfg:

[root@hadoop000 conf]# cp zoo_sample.cfg zoo.cfg

3. Edit zoo.cfg: point dataDir at a local data directory and add one server entry per node:

dataDir=/opt/soft/zookeeper/data
server.1=hadoop000:2888:3888
server.2=hadoop001:2888:3888
server.3=hadoop002:2888:3888

The complete zoo.cfg then reads:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.

dataDir=/opt/soft/zookeeper/data

# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1


server.1=hadoop000:2888:3888
server.2=hadoop001:2888:3888
server.3=hadoop002:2888:3888

4. Enter the zookeeper directory and create the data directory and the myid file:

[root@hadoop000 zookeeper]# mkdir data
[root@hadoop000 zookeeper]# touch data/myid 
[root@hadoop000 zookeeper]# echo 1 > data/myid 

5. Copy the zookeeper directory from hadoop000 to /opt/soft/ on hadoop001 and hadoop002:

[root@hadoop000 soft]# scp -r zookeeper 192.168.95.20:/opt/soft/ 
[root@hadoop000 soft]# scp -r zookeeper 192.168.95.30:/opt/soft/  

On hadoop001 and hadoop002, set the matching myid values:
[root@hadoop001 zookeeper]# echo 2 > data/myid
[root@hadoop002 zookeeper]# echo 3 > data/myid 

VI. Deploy Hadoop
1. Set the Hadoop environment variables; first confirm the Hadoop directory:

[root@hadoop000 hadoop]# pwd
/opt/soft/hadoop

Edit /etc/profile and append the following:

export HADOOP_HOME=/opt/soft/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

Apply with source /etc/profile, then verify:

[root@hadoop000 hadoop]# echo $HADOOP_HOME
/opt/soft/hadoop

2. Edit hadoop-env.sh
Enter the /opt/soft/hadoop/etc/hadoop directory and set the following in hadoop-env.sh (starting at line 25; adjust JAVA_HOME to your own JDK path):

25 export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
26 export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"

3. Configure core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml

Delete the four stock files in /opt/soft/hadoop/etc/hadoop and drop in your pre-configured versions (sketched below).
Note: some values in these files must be adapted to your own machines!
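
The post does not reproduce the four files. As a rough sketch of the HA-critical properties only, assuming the nameservice name mycluster (the zkfc log in step VII-4 shows /hadoop-ha/mycluster being created), the Hadoop 2.x default ports 8020/50070/8485, and NameNode/ResourceManager IDs nn1/nn2 and rm1/rm2 invented here for illustration:

core-site.xml (excerpt):
<property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
<property><name>hadoop.tmp.dir</name><value>/opt/soft/hadoop/tmp</value></property>
<property><name>ha.zookeeper.quorum</name><value>hadoop000:2181,hadoop001:2181,hadoop002:2181</value></property>

hdfs-site.xml (excerpt):
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>hadoop000:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>hadoop001:8020</value></property>
<property><name>dfs.namenode.http-address.mycluster.nn1</name><value>hadoop000:50070</value></property>
<property><name>dfs.namenode.http-address.mycluster.nn2</name><value>hadoop001:50070</value></property>
<property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://hadoop000:8485;hadoop001:8485;hadoop002:8485/mycluster</value></property>
<property><name>dfs.journalnode.edits.dir</name><value>/opt/soft/hadoop/data/journal</value></property>
<property><name>dfs.client.failover.proxy.provider.mycluster</name><value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
<property><name>dfs.ha.fencing.methods</name><value>sshfence</value></property>
<property><name>dfs.ha.fencing.ssh.private-key-files</name><value>/root/.ssh/id_rsa</value></property>
<property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>

yarn-site.xml (excerpt, for ResourceManager HA):
<property><name>yarn.resourcemanager.ha.enabled</name><value>true</value></property>
<property><name>yarn.resourcemanager.cluster-id</name><value>yarncluster</value></property>
<property><name>yarn.resourcemanager.ha.rm-ids</name><value>rm1,rm2</value></property>
<property><name>yarn.resourcemanager.hostname.rm1</name><value>hadoop000</value></property>
<property><name>yarn.resourcemanager.hostname.rm2</name><value>hadoop001</value></property>
<property><name>yarn.resourcemanager.zk-address</name><value>hadoop000:2181,hadoop001:2181,hadoop002:2181</value></property>

mapred-site.xml would at minimum set mapreduce.framework.name to yarn. The directory, key-file, and ID values above are illustrative and must match your environment.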

4. Create a temporary directory, then distribute the Hadoop directory to the other nodes:

[root@hadoop000 hadoop]# mkdir -p /opt/soft/hadoop/tmp
[root@hadoop000 hadoop]# chmod -R 777 /opt/soft/hadoop/tmp
[root@hadoop000 hadoop]# chown -R root:root /opt/soft/hadoop/tmp
[root@hadoop000 hadoop]# scp -r hadoop root@hadoop001:/opt/soft
[root@hadoop000 hadoop]# scp -r hadoop root@hadoop002:/opt/soft
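
The post sets the environment variables only on hadoop000; the same exports are also needed on hadoop001 and hadoop002 before the start commands below will resolve. A minimal sketch:

# On hadoop001 and hadoop002, append the same exports to /etc/profile:
echo 'export ZOOKEEPER_HOME=/opt/soft/zookeeper' >> /etc/profile
echo 'export HADOOP_HOME=/opt/soft/hadoop' >> /etc/profile
echo 'export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$PATH' >> /etc/profile
source /etc/profile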

VII. Start the Cluster
1. Start ZooKeeper (on all three nodes):

[root@hadoop000 zookeeper]# $ZOOKEEPER_HOME/bin/zkServer.sh start
[root@hadoop001 zookeeper]# $ZOOKEEPER_HOME/bin/zkServer.sh start
[root@hadoop002 zookeeper]# $ZOOKEEPER_HOME/bin/zkServer.sh start
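
Optionally check each node's role; one node should report Mode: leader and the other two Mode: follower:

$ZOOKEEPER_HOME/bin/zkServer.sh status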

2. Start the JournalNode process on each node:

[root@hadoop000 ~]# cd /opt/soft/hadoop/sbin
[root@hadoop000 sbin]# hadoop-daemon.sh start journalnode

[root@hadoop001 ~]# cd /opt/soft/hadoop/sbin
[root@hadoop001 sbin]# hadoop-daemon.sh start journalnode

[root@hadoop002 ~]# cd /opt/soft/hadoop/sbin
[root@hadoop002 sbin]# hadoop-daemon.sh start journalnode

3. Format the NameNode (on hadoop000):
[root@hadoop000 hadoop]# hadoop namenode -format
...
...
17/09/02 23:16:50 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1577237506-192.168.137.130-1504365410166
17/09/02 23:16:50 INFO common.Storage: Storage directory /opt/soft/hadoop/data/dfs/name has been successfully formatted.
17/09/02 23:16:50 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/soft/hadoop/data/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/09/02 23:16:50 INFO namenode.FSImageFormatProtobuf: Image file /opt/soft/hadoop/data/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 306 bytes saved in 0 seconds.
17/09/02 23:16:51 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/09/02 23:16:51 INFO util.ExitUtil: Exiting with status 0
17/09/02 23:16:51 INFO namenode.NameNode: SHUTDOWN_MSG:
/**************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop001/192.168.95.10

Sync the metadata from hadoop000 to hadoop001 (the standby NameNode):

[root@hadoop000 hadoop]# scp -r data/ root@hadoop001:/opt/soft/hadoop
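
Alternatively, Hadoop ships a built-in way to seed the standby's metadata; instead of the scp above, this could be run on hadoop001:

[root@hadoop001 ~]# hdfs namenode -bootstrapStandby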

4. Initialize the HA state in ZooKeeper (on hadoop000):
[root@hadoop000 bin]# hdfs zkfc -formatZK
...
...
17/09/02 23:19:13 INFO ha.ActiveStandbyElector: Session connected.
17/09/02 23:19:13 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
17/09/02 23:19:13 INFO zookeeper.ZooKeeper: Session: 0x35e42f121f50000 closed
17/09/02 23:19:13 INFO zookeeper.ClientCnxn: EventThread shut down
17/09/02 23:19:13 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG:
/**************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at hadoop001/192.168.95.10
**************************************************/

5. Start HDFS:

[root@hadoop000 sbin]# start-dfs.sh
Starting namenodes on [hadoop000 hadoop001]
hadoop000: starting namenode, logging to /opt/soft/hadoop/logs/hadoop-root-namenode-hadoop000.out
hadoop001: starting namenode, logging to /opt/soft/hadoop/logs/hadoop-root-namenode-hadoop001.out
hadoop001: starting datanode, logging to /opt/soft/hadoop/logs/hadoop-root-datanode-hadoop001.out
hadoop000: starting datanode, logging to /opt/soft/hadoop/logs/hadoop-root-datanode-hadoop000.out
hadoop002: starting datanode, logging to /opt/soft/hadoop/logs/hadoop-root-datanode-hadoop002.out
Starting journal nodes [hadoop000 hadoop001 hadoop002]
hadoop000: starting journalnode, logging to /opt/soft/hadoop/logs/hadoop-root-journalnode-hadoop000.out
hadoop001: starting journalnode, logging to /opt/soft/hadoop/logs/hadoop-root-journalnode-hadoop001.out
hadoop002: starting journalnode, logging to /opt/soft/hadoop/logs/hadoop-root-journalnode-hadoop002.out
Starting ZK Failover Controllers on NN hosts [hadoop000 hadoop001]
hadoop001: starting zkfc, logging to /opt/soft/hadoop/logs/hadoop-root-zkfc-hadoop001.out
hadoop000: starting zkfc, logging to /opt/soft/hadoop/logs/hadoop-root-zkfc-hadoop000.out

6. Start YARN:

[root@hadoop000 sbin]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/soft/hadoop/logs/yarn-root-resourcemanager-hadoop000.out
hadoop002: starting nodemanager, logging to /opt/soft/hadoop/logs/yarn-root-nodemanager-hadoop002.out
hadoop001: starting nodemanager, logging to /opt/soft/hadoop/logs/yarn-root-nodemanager-hadoop001.out
hadoop000: starting nodemanager, logging to /opt/soft/hadoop/logs/yarn-root-nodemanager-hadoop000.out

7. Start the ResourceManager on the standby node (hadoop001, per the machine plan):

[root@hadoop001 hadoop]# yarn-daemon.sh start resourcemanager

starting resourcemanager, logging to /opt/soft/hadoop/logs/yarn-root-resourcemanager-hadoop001.out

8. Check the running processes on each node:

[root@hadoop000 ~]# jps
2488 JournalNode
2641 DFSZKFailoverController
2808 ResourceManager
2280 DataNode
5395 Jps
2179 NameNode
2910 NodeManager
2030 QuorumPeerMain
[root@hadoop001 ~]# jps
1927 QuorumPeerMain
2583 ResourceManager
2403 NodeManager
6255 Jps
2262 DFSZKFailoverController
2173 JournalNode
1998 NameNode
2063 DataNode
[root@hadoop002 ~]# jps
2049 NodeManager
2678 Jps
1921 JournalNode
1851 DataNode
1786 QuorumPeerMain

VIII. Verify HDFS/YARN HA
hadoop000 (NameNode active)

(screenshot: hadoop000 NameNode web UI showing the "active" state)

hadoop001 (NameNode standby)
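
The standby screenshot is not reproduced here. The same check can be done from the command line; a sketch assuming the illustrative IDs nn1/nn2 and rm1/rm2 from the configuration sketch in section VI (substitute the IDs from your own hdfs-site.xml/yarn-site.xml):

# One NameNode should report "active", the other "standby":
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Likewise for the ResourceManagers:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

# To exercise automatic failover, kill the active NameNode (PID from jps)
# and confirm the standby takes over:
jps
kill -9 <NameNode PID>
hdfs haadmin -getServiceState nn2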
