
Spark 2.2.0 High Availability Setup


I. Overview

1. This walkthrough builds on the Hadoop HA environment set up earlier.

2. The ZooKeeper environment required for Spark HA was configured in a previous post and is not repeated here.

3. Required packages: scala-2.12.3.tgz and spark-2.2.0-bin-hadoop2.7.tgz

4. Host plan

bd1, bd2, bd3: Worker
bd4, bd5: Master, Worker

II. Configure Scala

1. Extract and copy

[root@bd1 ~]# tar -zxf scala-2.12.3.tgz 
[root@bd1 ~]# cp -r scala-2.12.3 /usr/local/scala

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Verify

[root@bd1 ~]# scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.

III. Configure Spark

1. Extract and copy

[root@bd1 ~]# tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
[root@bd1 ~]# cp -r spark-2.2.0-bin-hadoop2.7 /usr/local/spark

2. Configure environment variables

[root@bd1 ~]# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile

3. Edit spark-env.sh (the file does not exist by default; copy it from the bundled template first)

[root@bd1 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bd4:2181,bd5:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1

4. Edit spark-defaults.conf (likewise created from the bundled template)

[root@bd1 conf]# vim spark-defaults.conf
spark.master                     spark://bd4:7077,bd5:7077
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://master:/user/spark/history
spark.serializer                 org.apache.spark.serializer.KryoSerializer

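Both spark-env.sh and spark-defaults.conf (and the slaves file in step 6) ship only as `.template` files in Spark's conf directory. A minimal sketch of creating the editable copies before filling them in:

```shell
# Create the editable config files from the templates bundled with Spark.
cd /usr/local/spark/conf
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf
cp slaves.template slaves
```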
5. Create the event-log directory in HDFS

hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history

6. Edit slaves

[root@bd1 conf]# vim slaves
bd1
bd2
bd3
bd4
bd5

IV. Sync to the other hosts

1. Use scp to sync Scala to bd2-bd5

scp -r /usr/local/scala root@bd2:/usr/local/
scp -r /usr/local/scala root@bd3:/usr/local/
scp -r /usr/local/scala root@bd4:/usr/local/
scp -r /usr/local/scala root@bd5:/usr/local/

2. Sync Spark to bd2-bd5

scp -r /usr/local/spark root@bd2:/usr/local/
scp -r /usr/local/spark root@bd3:/usr/local/
scp -r /usr/local/spark root@bd4:/usr/local/
scp -r /usr/local/spark root@bd5:/usr/local/
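The per-host commands above can be collapsed into a loop. Note that the SCALA_HOME and SPARK_HOME exports in /etc/profile also have to reach every node, which the steps imply but do not show. A sketch, assuming passwordless root SSH between the hosts:

```shell
# Sync Scala, Spark, and the environment variables to bd2-bd5 in one pass.
for h in bd2 bd3 bd4 bd5; do
  scp -r /usr/local/scala root@"$h":/usr/local/
  scp -r /usr/local/spark root@"$h":/usr/local/
  scp /etc/profile root@"$h":/etc/profile   # then run `source /etc/profile` on each host
done
```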

V. Start the cluster and test HA

1. Start order: ZooKeeper --> Hadoop --> Spark
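A sketch of that start order, using the standard ZooKeeper and Hadoop control scripts (assuming they are on the PATH of the respective nodes):

```shell
# 1) ZooKeeper, on each quorum node:
zkServer.sh start

# 2) Hadoop HA (HDFS + YARN), from a NameNode host:
start-dfs.sh
start-yarn.sh

# 3) Spark standalone cluster, from the primary Master (next step):
/usr/local/spark/sbin/start-all.sh
```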

2. Start Spark

bd4:

[root@bd4 sbin]# cd /usr/local/spark/sbin/
[root@bd4 sbin]# ./start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd4.out
bd4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd4.out
bd2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd2.out
bd3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd3.out
bd5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd5.out
bd1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd1.out

[root@bd4 sbin]# jps
3153 DataNode
7235 Jps
3046 JournalNode
7017 Master
3290 NodeManager
7116 Worker
2958 QuorumPeerMain

bd5:

[root@bd5 sbin]# ./start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out

[root@bd5 sbin]# jps
3584 NodeManager
5602 RunJar
3251 QuorumPeerMain
8564 Master
3447 DataNode
8649 Jps
8474 Worker
3340 JournalNode


3. Kill the Master process on bd4

[root@bd4 sbin]# kill -9 7017
[root@bd4 sbin]# jps
3153 DataNode
7282 Jps
3046 JournalNode
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
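The failover can also be checked without the Web UI screenshots: the standalone Master exposes its state as JSON on the UI port (8080 by default; the exact `"status"` field layout is an assumption based on Spark's standalone Web UI):

```shell
# Query each Master's recovery state from its Web UI JSON endpoint.
for m in bd4 bd5; do
  printf '%s: ' "$m"
  curl -s "http://$m:8080/json" | grep -o '"status" *: *"[A-Z]*"'
  echo
done
```

After killing the process on bd4, its endpoint should refuse the connection while bd5 reports ALIVE.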


VI. Summary

At first I wanted to put the Masters on bd1 and bd2, but after starting Spark both nodes came up as Standby. Only after moving them to bd4 and bd5 in the configuration did everything run properly. In other words, the Spark HA Masters must be located on nodes of the ZooKeeper cluster to work correctly, i.e. nodes running the QuorumPeerMain process.
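The recovery data behind this behaviour is visible directly in ZooKeeper, under the znode configured via spark.deploy.zookeeper.dir in spark-env.sh. A quick check with the ZooKeeper CLI:

```shell
# List the state Spark's Masters persist in ZooKeeper (/spark, per spark-env.sh).
zkCli.sh -server bd4:2181 ls /spark
```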

This article originates from the "lullaby" blog; please keep this attribution: http://lullaby.blog.51cto.com/10815696/1972098
