Spark 2.4.0 Integration with Hive 1.2.1
Published by 阿新 on 2018-12-26
More resources
- github: https://github.com/opensourceteams/spark-scala-maven-2.4.0
- apache-hive-1.2.1-bin installation: https://github.com/opensourceteams/apache-hive-1.2.1-bin
Official documentation
Skills covered
- Integrating spark-2.4.0-bin-hadoop2.7.tgz with Hive 1.2.1
- Installing a Spark 2.4.0 standalone-mode environment
- Start and stop commands for the Spark master, worker, and history server
- Configuring the master, worker, and history server and viewing their web UIs
- Running interactive commands in the Spark shell and monitoring Spark shell jobs
- Running the WordCount example and viewing it in the UI
- Viewing master, worker, history server, and executor logs
- Official docs: http://spark.apache.org/docs/latest/spark-standalone.html
Prerequisites
- Java installed (Java 1.8.0_191 used here)
- Scala installed (Scala 2.11.12 used here)
- Hadoop installed (Hadoop 2.9.2 used here)
- On choosing the Hadoop and Hive versions: Spark 2.4.0 bundles Hive 1.2.1 by default, which does not support Hadoop 3.0 or later, so a pre-3.0 Hadoop release was chosen here to avoid recompiling Spark. You can of course build Spark yourself, in which case you are free to pick your own Hive and Hadoop versions.
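Before installing Spark it is worth double-checking that the installed versions match the ones above; a quick sketch, assuming `java`, `scala`, and `hadoop` are already on the PATH:

```shell
# Print the installed versions to confirm they match the prerequisites
java -version        # expect 1.8.0_191
scala -version       # expect 2.11.12
hadoop version       # expect 2.9.2
```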
Installation
- Download the package: spark-2.4.0-bin-hadoop2.7.tgz
- Package download link:
- Upload the package to the server
- Extract the archive
tar -zxvf spark-2.4.0-bin-hadoop2.7.tgz -C /opt/module/bigdata/
- Configure environment variables (in ~/.bashrc)
export JAVA_HOME=/opt/module/jdk/jdk1.8.0_191
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/opt/module/scala/scala-2.11.12
export HADOOP_HOME=/opt/module/bigdata/hadoop-2.9.2
export SPARK_HOME=/opt/module/bigdata/spark-2.4.0-bin-hadoop2.7
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
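The new variables only take effect in a fresh (or reloaded) shell; a small verification sketch, assuming the paths exported above exist on your machine:

```shell
# Reload the shell profile and confirm the Spark binaries resolve
source ~/.bashrc
which spark-submit          # should print a path under $SPARK_HOME/bin
spark-submit --version      # should report Spark 2.4.0
```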
Configuration
Configuring Hadoop's classpath
- This applies when using a Spark build that does not bundle the Hadoop dependency jars
- Spark's configuration must then point at Hadoop's classpath
- Set it in the config file spark-env.sh
### in conf/spark-env.sh ###
# If 'hadoop' binary is on your PATH
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
# With explicit path to 'hadoop' binary
export SPARK_DIST_CLASSPATH=$(/path/to/hadoop/bin/hadoop classpath)
# Passing a Hadoop configuration directory
export SPARK_DIST_CLASSPATH=$(hadoop --config /path/to/configs classpath)
spark-env.sh configuration
#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.
# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_JAVA_LIBRARY, to point to your libmesos.so if you use Mesos
# Options read in YARN client/cluster mode
# - SPARK_CONF_DIR, Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
# - SPARK_EXECUTOR_CORES, Number of cores for the executors (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
# Options for the daemons used in the standalone deploy mode
# - SPARK_MASTER_HOST, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_DAEMON_MEMORY, to allocate to the master, worker and history server themselves (default: 1g).
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_DAEMON_CLASSPATH, to set the classpath for all daemons
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers
# Generic options for the daemons used in the standalone deploy mode
# - SPARK_CONF_DIR Alternate conf dir. (Default: ${SPARK_HOME}/conf)
# - SPARK_LOG_DIR Where log files are stored. (Default: ${SPARK_HOME}/logs)
# - SPARK_PID_DIR Where the pid file is stored. (Default: /tmp)
# - SPARK_IDENT_STRING A string representing this instance of spark. (Default: $USER)
# - SPARK_NICENESS The scheduling priority for daemons. (Default: 0)
# - SPARK_NO_DAEMONIZE Run the proposed command in the foreground. It will not output a PID file.
# Options for native BLAS, like Intel MKL, OpenBLAS, and so on.
# You might get better performance to enable these options if using native BLAS (see SPARK-21305).
# - MKL_NUM_THREADS=1 Disable multi-threading of Intel MKL
# - OPENBLAS_NUM_THREADS=1 Disable multi-threading of OpenBLAS
# Passing a Hadoop configuration directory
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
# master settings
SPARK_MASTER_HOST=standalone.com  # hostname the master binds to
SPARK_MASTER_PORT=7077            # master RPC port; workers connect to the master on this port
SPARK_MASTER_WEBUI_PORT=8080      # port for the master's web UI
# worker settings
SPARK_WORKER_MEMORY=1g            # total memory the worker can give to executors
slaves configuration
- File path: $SPARK_HOME/conf/slaves
standalone.com
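The slaves file lists one worker host per line; a minimal sketch for the single-node setup used here:

```shell
# Register standalone.com as the only worker host
echo "standalone.com" > $SPARK_HOME/conf/slaves
cat $SPARK_HOME/conf/slaves
```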
History server configuration
- Edit $SPARK_HOME/conf/spark-defaults.conf
- Note: spark.eventLog.dir and spark.history.fs.logDirectory must point to the same path
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.
# Example:
# spark.master spark://master:7077
# spark.eventLog.enabled true
# spark.eventLog.dir hdfs://namenode:8021/directory
# spark.serializer org.apache.spark.serializer.KryoSerializer
# spark.driver.memory 5g
# spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
# history
spark.master=spark://standalone.com:7077
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs://standalone.com:9000/spark/log/historyEventLog
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.driver.memory=1g
spark.history.fs.logDirectory=hdfs://standalone.com:9000/spark/log/historyEventLog
spark.history.ui.port=18080
spark.history.fs.update.interval=10s
# The number of application UIs to retain. If this cap is exceeded, then the oldest applications will be removed.
spark.history.retainedApplications=50
spark.history.fs.cleaner.enabled=false
spark.history.fs.cleaner.interval=1d
spark.history.fs.cleaner.maxAge=7d
spark.history.ui.acls.enable=false
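The HDFS directory referenced by spark.eventLog.dir must exist before applications start writing event logs; a sketch, assuming HDFS is running at standalone.com:9000:

```shell
# Create the shared event-log directory used by both settings above
hdfs dfs -mkdir -p /spark/log/historyEventLog
hdfs dfs -ls /spark/log
```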
Spark and Hive integration configuration
- Copy Hive's configuration file hive-site.xml into $SPARK_HOME/conf/
- hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hive.metastore.warehouse.dir</name>
<value>/user/hive/warehouse</value>
</property>
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://macbookmysql.com:3306/hive?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.cj.jdbc.Driver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>admin</value>
<description>username to use against metastore database</description>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>000000</value>
<description>password to use against metastore database</description>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
<property>
<name>hive.metastore.uris</name>
<value>thrift://standalone.com:9083</value>
<description>Thrift URI for the remote metastore. Used by the metastore client to connect to the remote metastore</description>
</property>
</configuration>
Configuring the MySQL driver
- Choose the MySQL driver jar matching the installed MySQL version
cp mysql-connector-java-8.0.13.jar $SPARK_HOME/jars/
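With hive-site.xml in place and the driver jar copied, the integration can be smoke-tested from Spark's SQL shell; a sketch, assuming the Hive metastore service is already running (e.g. started with `hive --service metastore &`) and the Spark master is up:

```shell
# Query the Hive metastore through Spark; a working setup lists
# at least the "default" database
spark-sql -e "SHOW DATABASES;"
```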
master
Master start command
- Default master log path: $SPARK_HOME/logs/*-org.apache.spark.deploy.master.Master-*.out
start-master.sh
Master stop command
stop-master.sh
worker
Worker start command
- Default worker log path: $SPARK_HOME/logs/*-org.apache.spark.deploy.worker.Worker-*.out
start-slave.sh spark://standalone.com:7077
Worker stop command
stop-slave.sh
Command to start all workers
- Requires all worker hosts to be listed in $SPARK_HOME/conf/slaves
start-slaves.sh spark://standalone.com:7077
Command to stop all workers
stop-slaves.sh
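Whether the daemons are actually up can be confirmed from the JVM process list; a quick check, assuming the JDK's `jps` tool is on the PATH:

```shell
# After start-master.sh / start-slaves.sh, both daemons should appear
jps | grep -E "Master|Worker"
```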
Starting spark-shell
- Once spark-shell is running, its jobs can be monitored at: http://standalone.com:4040
spark-shell
WordCount example
val rdd = sc.textFile("/home/liuwen/data/a.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_)
rdd.saveAsTextFile("hdfs://standalone.com:9000/opt/temp/output_b_4")
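Once the save completes, the job's output can be inspected directly from HDFS, using the same output path as above:

```shell
# Print the word counts written by saveAsTextFile
hdfs dfs -cat hdfs://standalone.com:9000/opt/temp/output_b_4/part-*
```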
Starting the history server
- History server URL: http://standalone.com:18080
start-history-server.sh
Stopping the history server
stop-history-server.sh
Web consoles
Master console
spark-shell
- Job UI: http://standalone.com:4040
- Interactive terminal session
end