
Spark Streaming Integration with Flume (Poll Mode and Push Mode)

Flume is a framework for real-time log collection and can be hooked up to the Spark Streaming real-time processing framework: Flume continuously produces data, and Spark Streaming processes it in real time.
Spark Streaming can be connected to Flume NG in two ways: either Flume NG pushes messages to Spark Streaming (Push mode), or Spark Streaming polls the data from Flume (Poll mode).
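The two modes correspond to two different FlumeUtils calls. Below is a minimal sketch of the difference (it assumes an already-created StreamingContext named ssc; the host names and port are placeholders, and complete runnable examples follow in the sections below):

import org.apache.spark.streaming.flume.FlumeUtils

// Push mode: Spark Streaming starts an Avro receiver and Flume's avro sink pushes events to it
val pushStream = FlumeUtils.createStream(ssc, "spark-app-host", 8888)

// Poll mode: Flume buffers events in a SparkSink and the Spark Streaming receiver pulls them
val pollStream = FlumeUtils.createPollingStream(ssc, "flume-agent-host", 8888)

The Spark documentation generally describes the pull-based (Poll) approach as the more reliable of the two, because the custom SparkSink keeps events buffered until Spark Streaming has actually received and replicated them.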
1.1 Poll Mode
(1) Install Flume 1.6 or later.
(2) Download the dependency jar:
place spark-streaming-flume-sink_2.11-2.0.2.jar into Flume's lib directory (this jar provides the SparkSink used in the configuration below).
(3) Fix the Scala library version under flume/lib:
take scala-library-2.11.8.jar from the jars folder of the Spark installation directory and replace the scala-library-2.10.1.jar that ships with Flume.
(4) Write the Flume agent. Since Spark pulls the data, the Flume sink only needs to bind to the machine Flume itself runs on.
(5) Write the flume-poll.conf configuration file:

a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/data
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = hdp-node-01
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000
Start the agent with:
flume-ng agent -n a1 -c /opt/bigdata/flume/conf -f /opt/bigdata/flume/conf/flume-poll.conf -Dflume.root.logger=INFO,console

Prepare a data file data.txt in the /root/data directory on the server, e.g. a few lines of space-separated words such as "hadoop spark hadoop"; the spooldir source picks up new files placed in this directory.

(6) Start the Spark Streaming application, which pulls the data from the machine Flume runs on.
(7) Code implementation.
Add the pom dependency below (the project also needs the standard spark-core_2.11 and spark-streaming_2.11 2.0.2 dependencies for the SparkContext and StreamingContext APIs used in the code):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.0.2</version>
</dependency>

The complete code is as follows:

package cn.itcast.Flume

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}

// Spark Streaming integrated with Flume -- pull (Poll) mode
object SparkStreamingPollFlume {
  def main(args: Array[String]): Unit = {
      //1. Create the SparkConf
      val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreamingPollFlume").setMaster("local[2]")
      //2. Create the SparkContext
      val sc = new SparkContext(sparkConf)
      sc.setLogLevel("WARN")
      //3. Create the StreamingContext with a 5-second batch interval
      val ssc = new StreamingContext(sc,Seconds(5))
      ssc.checkpoint("./flume")
      //4. Pull the data from Flume via FlumeUtils.createPollingStream
      val pollingStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(ssc,"192.168.200.100",8888)
      //5. Extract the body of each Flume event: {"headers":xxxxxx,"body":xxxxx}
      val data: DStream[String] = pollingStream.map(x=>new String(x.event.getBody.array()))
      //6. Split each line and map every word to (word, 1)
      val wordAndOne: DStream[(String, Int)] = data.flatMap(_.split(" ")).map((_,1))
      //7. Accumulate the counts of identical words across batches
      val result: DStream[(String, Int)] = wordAndOne.updateStateByKey(updateFunc)
      //8. Print the output
      result.print()
      //9. Start the streaming computation
      ssc.start()
      ssc.awaitTermination()

  }
  //currentValues: all the 1s for each word in the current batch, e.g. (hadoop,1) (hadoop,1) (hadoop,1)
  //historyValues: the accumulated count of each word across all previous batches, e.g. (hadoop,100)
  def updateFunc(currentValues:Seq[Int], historyValues:Option[Int]):Option[Int] = {
    val newValue: Int = currentValues.sum + historyValues.getOrElse(0)
    Some(newValue)
  }

}
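If several Flume agents each run a SparkSink, one receiver can pull from all of them. A minimal sketch (the host names and second address below are hypothetical, and ssc is the StreamingContext created above) using the createPollingStream overload that takes a list of addresses:

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel

// Pull from several SparkSink addresses with a single receiver
val addresses = Seq(
  new InetSocketAddress("hdp-node-01", 8888),
  new InetSocketAddress("hdp-node-02", 8888)
)
val multiPollStream = FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)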

(8) Observe the IDEA console output.
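With a data.txt like the example above, the console should print (word, count) pairs every 5 seconds, roughly in the following form (the timestamp is illustrative; the counts keep growing across batches because updateStateByKey maintains running totals):

-------------------------------------------
Time: 1496157485000 ms
-------------------------------------------
(hadoop,3)
(spark,2)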

1.2 Push Mode
(1) Write the flume-push.conf configuration file:

#push mode
a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/data
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 172.16.43.63
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000

Note that the hostname and port specified in this configuration file are the IP address and port of the server where the Spark application runs: in push mode the Spark Streaming receiver starts an Avro server on that address, and Flume's avro sink connects to it and pushes the events (which is also why the Spark application must be started before the Flume agent). Start the agent with:

flume-ng agent -n a1 -c /opt/bigdata/flume/conf -f /opt/bigdata/flume/conf/flume-push.conf -Dflume.root.logger=INFO,console

(2) The code implementation is as follows:

package cn.test.spark

import java.net.InetSocketAddress

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Spark Streaming integrated with Flume -- push mode (Push)
  */
object SparkStreaming_Flume_Push {
  //newValues: all the 1s from the (word,1) pairs of the same word in the current batch
  //runningCount: the accumulated total for the same key across all previous batches
  def updateFunction(newValues: Seq[Int], runningCount: Option[Int]): Option[Int] = {
    val newCount = runningCount.getOrElse(0) + newValues.sum
    Some(newCount)
  }
  def main(args: Array[String]): Unit = {
    //Configure the SparkConf
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreaming_Flume_Push").setMaster("local[2]")
    //Build the SparkContext
    val sc: SparkContext = new SparkContext(sparkConf)
    //Build the StreamingContext with the batch interval
    val scc: StreamingContext = new StreamingContext(sc, Seconds(5))
    //Set the log level
    sc.setLogLevel("WARN")
    //Set the checkpoint directory (required by updateStateByKey)
    scc.checkpoint("./")
    //Flume pushes data to this address:
    //the IP of the server where this application runs, matching the Flume configuration file
    val flumeStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createStream(scc,"172.16.43.63",8888,StorageLevel.MEMORY_AND_DISK)

    //Get the data from Flume: it lives in the event body, converted here to a String
    val lineStream: DStream[String] = flumeStream.map(x=>new String(x.event.getBody.array()))
    //Word count with running totals
    val result: DStream[(String, Int)] = lineStream.flatMap(_.split(" ")).map((_,1)).updateStateByKey(updateFunction)

    result.print()
    scc.start()
    scc.awaitTermination()
  }

}
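Both configuration files set fileHeader = true, so each event also carries Flume headers; for the spooldir source that includes the path of the originating file. A small sketch of reading the headers alongside the body (purely illustrative, meant to sit inside main after flumeStream has been created):

// Each SparkFlumeEvent wraps an AvroFlumeEvent that has both headers and a body
val headersAndBody: DStream[(String, String)] = flumeStream.map { e =>
  val headers = e.event.getHeaders            // java.util.Map[CharSequence, CharSequence]
  val body    = new String(e.event.getBody.array())
  (headers.toString, body)
}
headersAndBody.print()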

(3) Start and run
a. Run the Spark application first.

b. Then start the Flume agent with the configuration file.
First rename /root/data/data.txt.COMPLETED back to data.txt (the spooldir source marks files it has already processed with the .COMPLETED suffix; renaming the file makes it be picked up again).
(4) Observe the IDEA console output.
