
Spark SQL: Accessing Hive Tables

Test Environment

Hadoop version: 2.6.5
Spark version: 2.3.0
Hive version: 1.2.2
master host: 192.168.11.170
slave1 host: 192.168.11.171

Implementation

SQL statements against Hive tables are normally translated into MapReduce jobs, which tend to be slow to run. Spark SQL also supports Hive tables, and it can cut execution time considerably.

1. Create the IDEA project

The pom.xml dependencies are as follows:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.tongfang.learn</groupId>
  <artifactId>learn</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>learn</name>
  
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <spark.core.version>2.3.0</spark.core.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>${spark.core.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>${spark.core.version}</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.38</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.11</artifactId>
      <version>${spark.core.version}</version>
    </dependency>
  </dependencies>
</project>

Also place the hive-site.xml configuration file in the project's resources directory. The hive-site.xml configuration is as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- Hive metastore service URI -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.11.170:9083</value>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
  </property>
  <!-- JDBC URL of the Hive metastore database -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.11.170:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=utf8&amp;useSSL=true&amp;useUnicode=true&amp;serverTimezone=UTC</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- Metastore database username -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <!-- Metastore database password -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>chenliabc</value>
  </property>
  <!-- Hive warehouse directory on HDFS -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
  <!-- HDFS URL of the cluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.11.170:9000</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>checked</value>
  </property>
</configuration>
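
Before writing any table logic, it can be worth sanity-checking that Spark actually picks up this configuration. A minimal sketch, assuming the metastore at 192.168.11.170:9083 is reachable (MetastoreCheck is an illustrative class name, not part of the project):

import org.apache.spark.sql.SparkSession;

public class MetastoreCheck {

    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .master("local[*]")
                .appName("Metastore Check")
                .enableHiveSupport()   // picks up hive-site.xml from the classpath
                .getOrCreate();

        // If the metastore is reachable, this lists the existing Hive databases
        spark.sql("show databases").show();
        spark.stop();
    }
}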

Sample code:

import org.apache.spark.sql.SparkSession;

public class HiveTest {

    public static void main(String[] args) {
        // Build a SparkSession with Hive support; hive-site.xml is read from the classpath
        SparkSession spark = SparkSession
                .builder()
                .appName("Java Spark Hive Example")
                .enableHiveSupport()
                .getOrCreate();
        // Create the table, load the local data file, and query it
        spark.sql("create table if not exists person(id int, name string, address string) row format delimited fields terminated by '|' stored as textfile");
        spark.sql("show tables").show();
        spark.sql("load data local inpath '/home/hadoop/software/person.txt' overwrite into table person");
        spark.sql("select * from person").show();
    }
}
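
Since spark.sql returns a Dataset<Row>, the query result can also be processed further with the DataFrame API rather than pure SQL. A minimal sketch (HiveQueryTest and the filter condition are illustrative, not part of the original project):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class HiveQueryTest {

    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("Java Spark Hive Query Example")
                .enableHiveSupport()
                .getOrCreate();

        // The SQL result is an ordinary DataFrame, so SQL and DataFrame calls compose
        Dataset<Row> people = spark.sql("select * from person");
        people.filter("address = 'beijing'").show();          // rows located in beijing
        System.out.println("total rows: " + people.count());  // row count of the table

        spark.stop();
    }
}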

person.txt contents:

1|tom|beijing
2|allen|shanghai
3|lucy|chengdu

2. Package and run

Before running, make sure the Hadoop cluster has started correctly and that the Hive metastore service is running:

./bin/hive --service metastore 
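
To keep the metastore running after the shell session ends, it is common to start it in the background instead; a sketch, with the log path chosen arbitrarily:

nohup ./bin/hive --service metastore > /tmp/metastore.log 2>&1 &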

Submit the Spark job:

spark-submit --class com.tongfang.learn.spark.hive.HiveTest --master yarn learn.jar
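
If you prefer not to package hive-site.xml into the jar, the same configuration can be shipped with the job instead; a sketch, where the path is an example:

spark-submit --class com.tongfang.learn.spark.hive.HiveTest --master yarn --files /path/to/hive-site.xml learn.jar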

Run results:
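
The original post showed screenshots here. With the sample data above, the final select should print roughly:

+---+-----+--------+
| id| name| address|
+---+-----+--------+
|  1|  tom| beijing|
|  2|allen|shanghai|
|  3| lucy| chengdu|
+---+-----+--------+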

Of course, you can also run it directly inside IDEA; the code needs a small adjustment:

import org.apache.spark.sql.SparkSession;

public class HiveTest {

    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .master("local[*]")   // run locally with all available cores
                .appName("Java Spark Hive Example")
                .enableHiveSupport()
                .getOrCreate();
        spark.sql("create table if not exists person(id int, name string, address string) row format delimited fields terminated by '|' stored as textfile");
        spark.sql("show tables").show();
        spark.sql("load data local inpath 'src/main/resources/person.txt' overwrite into table person");
        spark.sql("select * from person").show();
    }
}

The following error may be reported during the run:

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.RuntimeException: java.io.IOException: (null) entry in command string: null chmod 0700 C:\Users\dell\AppData\Local\Temp\c530fb25-b267-4dd2-b24d-741727a6fbf3_resources;
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anon$1.<init>(HiveSessionStateBuilder.scala:69)
	at org.apache.spark.sql.hive.HiveSessionStateBuilder.analyzer(HiveSessionStateBuilder.scala:69)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
	at org.apache.spark.sql.internal.BaseSessionStateBuilder$$anonfun$build$2.apply(BaseSessionStateBuilder.scala:293)
	at org.apache.spark.sql.internal.SessionState.analyzer$lzycompute(SessionState.scala:79)
	at org.apache.spark.sql.internal.SessionState.analyzer(SessionState.scala:79)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:57)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:55)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:47)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:74)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
	at com.tongfang.learn.spark.hive.HiveTest.main(HiveTest.java:15)

Solution:
1. Download the Hadoop Windows binary package (winutils).
2. In the run configuration of the launcher class, set the environment variable HADOOP_HOME=D:\winutils\hadoop-2.6.4, where the value is the directory of the Hadoop Windows binaries.

Run results: the output should match the cluster run shown above (the original post's screenshots are omitted).

Summary

This article walked through the code for accessing Hive tables with Spark SQL and two ways of running it: submitting to the cluster with spark-submit and running directly in IDEA.