[15] Integrating Spark Streaming with Kafka using the Direct approach (in Scala)

Overview from the official documentation

Kafka introduced a new consumer API between versions 0.8 and 0.10. The 0.8 integration is compatible with 0.9 and 0.10 brokers, but the 0.10 integration is not compatible with earlier brokers.

This article uses the spark-streaming-kafka-0-8 integration (see the official documentation).

There are two approaches to configuring Spark Streaming to receive data from Kafka: the old approach uses a Receiver, while the new approach, introduced in Spark 1.3, works without Receivers.

Approach 1: Receiver-based Approach

Approach 2: Direct Approach (No Receivers)

This article covers the second, Direct approach.

This new approach, introduced in Spark 1.3, provides stronger end-to-end guarantees. Instead of using a Receiver to receive data,

it periodically queries Kafka for the latest offset in each topic partition and, from that, defines the offset range each batch will process.

When the jobs that process the data are launched, Kafka's simple consumer API is used to read the defined offset ranges from Kafka (much like reading files from a file system).

This feature was available for Scala and Java in Spark 1.3, and for Python from Spark 1.4.

Advantages of this approach over the Receiver-based one:

1. Simplified parallelism. There is no need to create multiple input Kafka streams and union them. With directStream, Spark Streaming creates as many RDD partitions as there are Kafka partitions to consume, and they all read from Kafka in parallel; the mapping between Kafka partitions and RDD partitions is one-to-one.

2. Efficiency. Zero data loss is achieved without a Write Ahead Log (WAL), whereas the Receiver-based approach has to write the data to a WAL first to guarantee zero data loss.

3. Exactly-once semantics, with no duplicate consumption. The Receiver-based approach uses Kafka's high-level API to store consumed offsets in ZooKeeper, which is the traditional way of consuming data from Kafka.

The Direct approach does have a drawback: it does not update offsets in ZooKeeper, so ZooKeeper-based Kafka monitoring tools cannot show consumption progress. Each batch has to write its offsets to ZooKeeper itself, as sketched below.
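
Each batch's offset ranges can be read from the stream's RDDs through the HasOffsetRanges interface of spark-streaming-kafka-0-8. A minimal sketch, where messages is the direct stream created in the code further down (it only logs the offsets; actually persisting them to ZooKeeper, e.g. with the Kafka client's ZkUtils, depends on your Kafka version and is not shown here):

import org.apache.spark.streaming.kafka.HasOffsetRanges

messages.foreachRDD { rdd =>
  // Must be read from the KafkaRDD itself, i.e. before any shuffle re-partitions it
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  offsetRanges.foreach { or =>
    // Replace this println with a write to ZooKeeper to feed monitoring tools
    println(s"${or.topic} partition ${or.partition}: ${or.fromOffset} -> ${or.untilOffset}")
  }
}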

Hands-on

1. Start ZooKeeper

cd /app/zookeeper/bin

./zkServer.sh start

2. Start Kafka

cd /app/kafka

bin/kafka-server-start.sh -daemon config/server.properties

3. Create a topic

bin/kafka-topics.sh --create --zookeeper node1:2181 --replication-factor 1 --partitions 1 --topic spark_topic

4. Test from the console that the topic can produce and consume messages properly

Produce messages

bin/kafka-console-producer.sh --broker-list node1:9092 --topic spark_topic

hello kafka

hello spark streaming

9092 is the listener port configured in server.properties.
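
For reference, this port comes from a line like the following in server.properties (illustrative excerpt; on 0.8-era brokers the setting is port, on 0.9+ it is listeners — adjust host and port to your own setup):

# server.properties
port=9092
# Kafka 0.9+ equivalent:
# listeners=PLAINTEXT://node1:9092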

Consume messages

bin/kafka-console-consumer.sh --zookeeper node1:2181 --topic spark_topic

5. Project layout

6. pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.sid.spark</groupId>
  <artifactId>spark-train</artifactId>
  <version>1.0</version>
  <inceptionYear>2008</inceptionYear>
  <properties>
    <scala.version>2.11.8</scala.version>
    <kafka.version>0.8.2.1</kafka.version>
    <spark.version>2.2.0</spark.version>
    <hadoop.version>2.9.0</hadoop.version>
    <hbase.version>1.4.4</hbase.version>
  </properties>

  <repositories>
    <repository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>${kafka.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
      <exclusions>
        <exclusion>
          <artifactId>servlet-api</artifactId>
          <groupId>javax.servlet</groupId>
        </exclusion>
      </exclusions>
    </dependency>

    <!--<dependency>-->
      <!--<groupId>org.apache.hbase</groupId>-->
      <!--<artifactId>hbase-client</artifactId>-->
      <!--<version>${hbase.version}</version>-->
    <!--</dependency>-->

    <!--<dependency>-->
      <!--<groupId>org.apache.hbase</groupId>-->
      <!--<artifactId>hbase-server</artifactId>-->
      <!--<version>${hbase.version}</version>-->
    <!--</dependency>-->

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume-sink_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>



    <dependency>
      <groupId>net.jpountz.lz4</groupId>
      <artifactId>lz4</artifactId>
      <version>1.3.0</version>
    </dependency>

    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.31</version>
    </dependency>

    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.5</version>
    </dependency>

  </dependencies>

  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
          <args>
            <arg>-target:jvm-1.8</arg>
          </args>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <configuration>
          <downloadSources>true</downloadSources>
          <buildcommands>
            <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <additionalProjectnatures>
            <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
          </additionalProjectnatures>
          <classpathContainers>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
            <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
          </classpathContainers>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
</project>

7. Code

package com.sid.spark

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Created by jy02268879 on 2018/7/19.
  *
  * Spark Streaming reading from Kafka via the Direct approach
  */
object KafkaDirect {
  def main(args: Array[String]): Unit = {

    if(args.length != 2){
      System.err.println("Usage: KafkaDirect <brokers> <topics>")
      System.exit(1)
    }

    val Array(brokers,topics) = args

    val sparkConf = new SparkConf().setAppName("KafkaDirect").setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf,Seconds(5))

    /**
      * @param ssc StreamingContext object
      * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
      *   configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers"
      *   to be set with Kafka broker(s) (NOT zookeeper servers), specified in
      *   host1:port1,host2:port2 form.
      *   If not starting from a checkpoint, "auto.offset.reset" may be set to "largest" or "smallest"
      *   to determine where the stream starts (defaults to "largest")
      * @param topics Names of the topics to consume
      * @tparam K type of Kafka message key
      * @tparam V type of Kafka message value
      * @tparam KD type of Kafka message key decoder
      * @tparam VD type of Kafka message value decoder
      * @return DStream of (Kafka message key, Kafka message value)
      */
    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    // Receiver-less direct stream: Spark Streaming tracks the offsets itself
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet
    )

    messages.print() // each element is a (Kafka message key, message value) tuple
    // word count over the message values
    messages.map(_._2).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    ssc.start()
    ssc.awaitTermination()
  }
}

8. Run the code
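
The program exits immediately unless exactly two arguments are given, so when running inside IDEA set the Program arguments to the broker list and topic, e.g. matching the setup above:

node1:9092 spark_topic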

9. Produce data in Kafka: a a a b b c c

10. Check the results in IDEA
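
For the input above, the word-count print should show something along these lines for the batch that consumed it (the timestamp will differ, and the raw messages.print() output appears first as (key, value) tuples with null keys):

-------------------------------------------
Time: ... ms
-------------------------------------------
(a,3)
(b,2)
(c,2)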

After it runs successfully locally, test submitting it to the server.

Modify the code to comment out setAppName and setMaster, as shown below.
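
With those commented out, the conf construction reduces to something like the line below; the application name and master are then supplied by spark-submit's --name and --master flags:

val sparkConf = new SparkConf() // appName/master come from spark-submit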

Package with Maven
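
A typical packaging command, assuming tests should be skipped for this build:

mvn clean package -DskipTests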

Upload the jar generated under target/ to the Spark server.

Run

cd /app/spark/spark-2.2.0-bin-2.9.0/bin

./spark-submit --class com.sid.spark.KafkaDirect --master local[2] --name KafkaDirect --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 /app/spark/test_data/spark-train-1.0-SNAPSHOT.jar node1:9092 spark_topic

UI