Big Data Primer, Day 24 — Spark Streaming (2): Integration with Flume and Kafka
阿新 • Published 2018-04-16
In the previous post the data source was a socket, which is a bit of a makeshift approach; the proper way is to consume data from a message queue such as Kafka.
The main supported sources are listed in the official Spark Streaming documentation.
Data can be obtained in two ways: push and pull.
I. Integrating Spark Streaming with Flume
1. The push approach
The pull approach (covered next) is generally preferred: with push, the Streaming receiver must already be up and listening before Flume starts sending, and events pushed while it is down are lost.
Add the dependency:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.10</artifactId>
    <version>${spark.version}</version>
</dependency>
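Note that the _2.10 suffix in the artifactId is the Scala version; if your Spark build uses Scala 2.11, the artifact is spark-streaming-flume_2.11 instead. Match it to your own environment.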
Write the code:
package com.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Created by ZX on 2015/6/22.
  */
object FlumePushWordCount {

  def main(args: Array[String]) {
    val host = args(0)
    val port = args(1).toInt
    val conf = new SparkConf().setAppName("FlumeWordCount") //.setMaster("local[2]")
    // This constructor creates the SparkContext internally, so we do not need to build one ourselves
    val ssc = new StreamingContext(conf, Seconds(5))
    // Push mode: Flume sends data to Spark (host and port here are where the Streaming receiver listens,
    // i.e. the address Flume's avro sink pushes to)
    val flumeStream = FlumeUtils.createStream(ssc, host, port)
    // The actual payload of a Flume event is obtained via event.getBody()
    val words = flumeStream.flatMap(x => new String(x.event.getBody().array()).split(" ")).map((_, 1))
    val results = words.reduceByKey(_ + _)
    results.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
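To run this in push mode, package the job and submit it with spark-submit, passing the listening host and port as the two program arguments; they must match the hostname and port of the avro sink in the Flume config below, and the Streaming job has to be up and listening before Flume starts pushing events.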
flume-push.conf — the configuration file on the Flume side:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /export/data/flume
a1.sources.r1.fileHeader = true
# Describe the sink
a1.sinks.k1.type = avro
# this is the receiving side (the host/port the Spark Streaming receiver listens on)
a1.sinks.k1.hostname = 192.168.31.172
a1.sinks.k1.port = 8888
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
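A note on the spooldir source used here: Flume ingests complete files dropped into /export/data/flume and then marks them as finished (by default it renames them with a .COMPLETED suffix), so test by moving whole files into the directory rather than appending to files that are already there.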
2. The pull approach
This is the recommended approach: Spark Streaming actively pulls the data that Flume has produced and buffered.
Write the code (same dependency as above):
package com.streaming

import java.net.InetSocketAddress

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FlumePollWordCount {

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("FlumePollWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    // Pull data from Flume: the Seq holds the Flume agent's address; you can new up several
    // InetSocketAddress entries here to pull from multiple Flume agents
    val address = Seq(new InetSocketAddress("172.16.0.11", 8888))
    val flumeStream = FlumeUtils.createPollingStream(ssc, address, StorageLevel.MEMORY_AND_DISK)
    val words = flumeStream.flatMap(x => new String(x.event.getBody().array()).split(" ")).map((_, 1))
    val results = words.reduceByKey(_ + _)
    results.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
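As the comment in the code notes, the Seq can hold more than one address. A minimal sketch of pulling from two Flume agents at once (the hostnames mini1 and mini2 are placeholders for this illustration; this snippet goes in the same main method as above):
    // Hypothetical: two Flume agents, each running a SparkSink on port 8888
    val addresses = Seq(
      new InetSocketAddress("mini1", 8888),
      new InetSocketAddress("mini2", 8888)
    )
    // the polling stream pulls events from every address listed in the Seq
    val multiAgentStream = FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK)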
Configuring Flume
The pull approach requires the relevant JARs to be present in Flume's lib directory (the Spark sink runs inside Flume, and the Streaming job then polls it); the specific JARs are listed in the official documentation.
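For reference, per the official Spark Streaming + Flume integration guide these are typically spark-streaming-flume-sink_2.10-&lt;spark version&gt;.jar plus its dependencies scala-library-&lt;scala version&gt;.jar and commons-lang3-&lt;version&gt;.jar; check the exact artifact names and versions against your own Spark release.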
The Flume configuration (flume-poll.conf):
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /export/data/flume
a1.sources.r1.fileHeader = true
# Describe the sink (this is Flume's own address, waiting to be polled by Spark)
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = mini1
a1.sinks.k1.port = 8888
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
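Note that the SparkSink class configured here is provided by the JARs added to Flume's lib directory above, and the sink's hostname and port (mini1:8888) must be the same address that the Streaming job polls in createPollingStream (in the code above, the IP 172.16.0.11 is assumed to be this same host).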
Start Flume first, then start the Spark Streaming job in IDEA:
bin/flume-ng agent -c conf -f conf/flume-poll.conf -n a1 -Dflume.root.logger=INFO,console
# the -Dflume.root.logger argument is optional