
Spark Streaming Source Code Walkthrough: No Receivers (Direct Approach) in Detail

Background:
The No Receivers (Direct) approach is seeing more and more use in production. It offers stronger control over consumption and better semantic consistency. It is also the natural way to operate on a data source: the source is accessed through a wrapper that is itself an RDD, which is why Spark Streaming defines a custom RDD for it, KafkaRDD.

Source code analysis:
1. KafkaRDD:

/**
 * A batch-oriented interface for consuming from Kafka.
 * Starting and ending offsets are specified in advance,
 * so that you can control exactly-once semantics.
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 * configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers" to be set
 * with Kafka broker(s) specified in host1:port1,host2:port2 form.
 * @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
 * @param messageHandler function for translating each message into the desired type
 */
private[kafka]
class KafkaRDD[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[_]: ClassTag, // a Decoder is needed because messages are encoded for transport
  T <: Decoder[_]: ClassTag,
  R: ClassTag] private[spark] (
    sc: SparkContext,
    kafkaParams: Map[String, String],
    val offsetRanges: Array[OffsetRange], // offsetRanges specify the data range of this RDD
    leaders: Map[TopicAndPartition, (String, Int)],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges {
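KafkaRDD itself is private[kafka]; outside the direct stream it is materialized for batch jobs through KafkaUtils.createRDD, which takes the offset ranges up front. A minimal sketch, assuming placeholder broker addresses, topic name and offsets:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkContext
import org.apache.spark.streaming.kafka.{KafkaUtils, OffsetRange}

// Batch-style read: offsets are fixed before the job runs,
// which is what makes exactly-once control possible.
val sc = new SparkContext("local[2]", "KafkaRDDExample")                  // placeholder master/app name
val kafkaParams = Map("metadata.broker.list" -> "host1:9092,host2:9092")  // placeholder brokers
val ranges = Array(OffsetRange("myTopic", 0, 0L, 100L))                   // placeholder topic/partition/offsets

val rdd = KafkaUtils.createRDD[String, String, StringDecoder, StringDecoder](
  sc, kafkaParams, ranges)
rdd.map(_._2).take(10).foreach(println)  // print the values of the first 10 messages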
2. HasOffsetRanges: an RDD is a list of partitions.
/**
 * Represents any object that has a collection of [[OffsetRange]]s. This can be used to access the
 * offset ranges in RDDs generated by the direct Kafka DStream (see
 * [[KafkaUtils.createDirectStream()]]).
 * {{{
// foreachRDD gives access to the partitions (and their offsets) of the RDD produced in the current batch duration.
 *   KafkaUtils.createDirectStream(...).foreachRDD { rdd =>
 *      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
 *      ...
 *   }
 * }}}
 */

trait HasOffsetRanges {
  def offsetRanges: Array[OffsetRange]
}
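A slightly fuller version of the pattern from the scaladoc above, as a sketch; directStream is assumed to be an InputDStream created elsewhere by KafkaUtils.createDirectStream, and the cast must be applied to the RDD produced by the direct stream itself, before any transformation, because only that RDD mixes in HasOffsetRanges:

import org.apache.spark.streaming.kafka.{HasOffsetRanges, OffsetRange}

// directStream: an InputDStream[(String, String)] created via KafkaUtils.createDirectStream (assumed)
directStream.foreachRDD { rdd =>
  // Only the RDD generated by the direct stream implements HasOffsetRanges.
  val offsetRanges: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  offsetRanges.foreach { o =>
    println(s"topic=${o.topic} partition=${o.partition} from=${o.fromOffset} until=${o.untilOffset}")
  }
  // ... process rdd, then optionally persist offsetRanges for recovery
}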
3. OffsetRange and its count: the range here is measured in messages; an OffsetRange specifies which offsets to read from a given topic and partition.
/**
 * Represents a range of offsets from a single Kafka TopicAndPartition. Instances of this class
 * can be created with `OffsetRange.create()`.
 * @param topic Kafka topic name
 * @param partition Kafka partition id
 * @param fromOffset Inclusive starting offset
 * @param untilOffset Exclusive ending offset
 */

final class OffsetRange private(
    val topic: String,
    val partition: Int,
    val fromOffset: Long,
    val untilOffset: Long) extends Serializable {
  import OffsetRange.OffsetRangeTuple
  // an offset is the position of a message within the partition
  /** Number of messages this OffsetRange refers to */
  def count(): Long = untilOffset - fromOffset
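For illustration, an OffsetRange can also be built directly via the companion object; fromOffset is inclusive and untilOffset exclusive, so count() is simply their difference (topic, partition and offsets below are made up):

import org.apache.spark.streaming.kafka.OffsetRange

val range = OffsetRange.create("myTopic", 0, 1000L, 1500L)  // example topic/partition/offsets
assert(range.count() == 500L)                               // untilOffset - fromOffset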
4. getPartitions in KafkaRDD:
override def getPartitions: Array[Partition] = {
  offsetRanges.zipWithIndex.map { case (o, i) =>
      val (host, port) = leaders(TopicAndPartition(o.topic, o.partition))
      new KafkaRDDPartition(i, o.topic, o.partition, o.fromOffset, o.untilOffset, host, port)
  }.toArray
}
5. KafkaRDDPartition: essentially a pointer (or reference) into the Kafka data source; it states exactly where the data for this RDD partition lives.
/** @param topic kafka topic name
  * @param partition kafka partition id
  * @param fromOffset inclusive starting offset
  * @param untilOffset exclusive ending offset
  * @param host preferred kafka host, i.e. the leader at the time the rdd was created
  * @param port preferred kafka host's port
  */
private[kafka]
class KafkaRDDPartition(
  val index: Int,
  val topic: String,
  val partition: Int,
  val fromOffset: Long,
  val untilOffset: Long,
  val host: String, // the host the data is read from; likewise for port below
  val port: Int
) extends Partition {
  // a KafkaRDD partition maps to exactly one Kafka topic/partition
  /** Number of messages this partition refers to */
  def count(): Long = untilOffset - fromOffset
}
6. compute in KafkaRDD computes each data partition:
override def compute(thePart: Partition, context: TaskContext): Iterator[R] = {
  val part = thePart.asInstanceOf[KafkaRDDPartition]
  assert(part.fromOffset <= part.untilOffset, errBeginAfterEnd(part))
  // if fromOffset equals untilOffset, the partition contains no messages
  if (part.fromOffset == part.untilOffset) {
    log.info(s"Beginning offset ${part.fromOffset} is the same as ending offset " +
      s"skipping ${part.topic} ${part.partition}")
    Iterator.empty
  } else {
    new KafkaRDDIterator(part, context)
  }
}
7. KafkaRDDIterator: fetches the data.
private class KafkaRDDIterator(
    part: KafkaRDDPartition,
    context: TaskContext) extends NextIterator[R] {

  context.addTaskCompletionListener{ context => closeIfNeeded() }

  log.info(s"Computing topic ${part.topic}, partition ${part.partition} " +
    s"offsets ${part.fromOffset} -> ${part.untilOffset}")

  val kc = new KafkaCluster(kafkaParams)
  val keyDecoder = classTag[U].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[K]]
  val valueDecoder = classTag[T].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[V]]
  val consumer = connectLeader
  var requestOffset = part.fromOffset
  var iter: Iterator[MessageAndOffset] = null

  // The idea is to use the provided preferred host, except on task retry attempts,
  // to minimize number of kafka metadata requests
  private def connectLeader: SimpleConsumer = {
    if (context.attemptNumber > 0) {
      kc.connectLeader(part.topic, part.partition).fold(
        errs => throw new SparkException(
          s"Couldn't connect to leader for topic ${part.topic} ${part.partition}: " +
            errs.mkString("\n")),
        consumer => consumer
      )
    } else {
      kc.connect(part.host, part.port)
    }
  }

  private def handleFetchErr(resp: FetchResponse) {
    if (resp.hasError) {
      val err = resp.errorCode(part.topic, part.partition)
      if (err == ErrorMapping.LeaderNotAvailableCode ||
        err == ErrorMapping.NotLeaderForPartitionCode) {
        log.error(s"Lost leader for topic ${part.topic} partition ${part.partition}, " +
          s" sleeping for ${kc.config.refreshLeaderBackoffMs}ms")
        Thread.sleep(kc.config.refreshLeaderBackoffMs)
      }
      // Let normal rdd retry sort out reconnect attempts
      throw ErrorMapping.exceptionFor(err)
    }
  }

  private def fetchBatch: Iterator[MessageAndOffset] = {
    val req = new FetchRequestBuilder()
      .addFetch(part.topic, part.partition, requestOffset, kc.config.fetchMessageMaxBytes)
      .build()
    val resp = consumer.fetch(req)
    handleFetchErr(resp)
    // kafka may return a batch that starts before the requested offset
    resp.messageSet(part.topic, part.partition)
      .iterator
      .dropWhile(_.offset < requestOffset)
  }

  override def close(): Unit = {
    if (consumer != null) {
      consumer.close()
    }
  }

  override def getNext(): R = {
    if (iter == null || !iter.hasNext) {
      iter = fetchBatch
    }
    if (!iter.hasNext) {
      assert(requestOffset == part.untilOffset, errRanOutBeforeEnd(part))
      finished = true
      null.asInstanceOf[R]
    } else {
      val item = iter.next()
      if (item.offset >= part.untilOffset) {
        assert(item.offset == part.untilOffset, errOvershotEnd(item.offset, part))
        finished = true
        null.asInstanceOf[R]
      } else {
        requestOffset = item.nextOffset
        messageHandler(new MessageAndMetadata(
          part.topic, part.partition, item.message, item.offset, keyDecoder, valueDecoder))
      }
    }
  }
}
8. KafkaCluster: encapsulates interaction with the Kafka cluster.
/**
 * Convenience methods for interacting with a Kafka cluster.
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 * configuration parameters</a>.
 *   Requires "metadata.broker.list" or "bootstrap.servers" to be set with Kafka broker(s),
 *   NOT zookeeper servers, specified in host1:port1,host2:port2 form
 */
private[spark]
class KafkaCluster(val kafkaParams: Map[String, String]) extends Serializable {

Example: data is usually read with KafkaUtils.createDirectStream:

KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)
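Expanded into a runnable sketch (broker list, topic name and batch interval are placeholders):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("DirectKafkaExample")  // placeholder app name
val ssc = new StreamingContext(conf, Seconds(5))             // placeholder batch interval

val kafkaParams = Map(
  "metadata.broker.list" -> "host1:9092,host2:9092",  // placeholder brokers (NOT zookeeper)
  "auto.offset.reset" -> "smallest")                  // start from the earliest available offsets
val topics = Set("myTopic")                           // placeholder topic name

val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)
stream.map(_._2).count().print()  // number of messages per batch

ssc.start()
ssc.awaitTermination()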
9. When operating on data with the Kafka Direct approach, createDirectStream is the entry point:
/**
 * Create an input stream that directly pulls messages from Kafka Brokers
 * without using any receiver. This stream can guarantee that each message
 * from Kafka is included in transformations exactly once (see points below).
 *
 * Points to note:
 *  - No receivers: This stream does not use any receiver. It directly queries Kafka
 *  - Offsets: This does not use Zookeeper to store offsets. The consumed offsets are tracked
 *    by the stream itself. For interoperability with Kafka monitoring tools that depend on
 *    Zookeeper, you have to update Kafka/Zookeeper yourself from the streaming application.
 *    You can access the offsets used in each batch from the generated RDDs (see
 *    [[org.apache.spark.streaming.kafka.HasOffsetRanges]]).
 *  - Failure Recovery: To recover from driver failures, you have to enable checkpointing
 *    in the [[StreamingContext]]. The information on consumed offset can be
 *    recovered from the checkpoint. See the programming guide for details (constraints, etc.).
 *  - End-to-end semantics: This stream ensures that every record is effectively received and
 *    transformed exactly once, but gives no guarantees on whether the transformed data are
 *    outputted exactly once. For end-to-end exactly-once semantics, you have to either ensure
 *    that the output operation is idempotent, or use transactions to output records atomically.
 *    See the programming guide for more details.
 *
 * @param ssc StreamingContext object
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 *   configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers"
 *   to be set with Kafka broker(s) (NOT zookeeper servers), specified in
 *   host1:port1,host2:port2 form.
 *   If not starting from a checkpoint, "auto.offset.reset" may be set to "largest" or "smallest"
 *   to determine where the stream starts (defaults to "largest")
 * @param topics Names of the topics to consume
 * @tparam K type of Kafka message key
 * @tparam V type of Kafka message value
 * @tparam KD type of Kafka message key decoder
 * @tparam VD type of Kafka message value decoder
 * @return DStream of (Kafka message key, Kafka message value)
 */
def createDirectStream[
  K: ClassTag,
  V: ClassTag,
  KD <: Decoder[K]: ClassTag,
  VD <: Decoder[V]: ClassTag] (
    ssc: StreamingContext,
    kafkaParams: Map[String, String],
    topics: Set[String]
): InputDStream[(K, V)] = {
  val messageHandler = (mmd: MessageAndMetadata[K, V]) => (mmd.key, mmd.message)

  val kc = new KafkaCluster(kafkaParams)
  // kc represents the Kafka cluster
  // fromOffsets obtains the concrete starting offsets
  val fromOffsets = getFromOffsets(kc, kafkaParams, topics)
  new DirectKafkaInputDStream[K, V, KD, VD, (K, V)](
    ssc, kafkaParams, fromOffsets, messageHandler)
}
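There is also an overload that takes explicit fromOffsets and a messageHandler; it is the one to use when resuming from offsets you stored yourself. A sketch, reusing ssc and kafkaParams from the example above, with made-up stored offsets:

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// Offsets recovered from your own offset store (values are illustrative only).
val fromOffsets = Map(TopicAndPartition("myTopic", 0) -> 42L)

// Keep the full metadata instead of just (key, value).
val messageHandler = (mmd: MessageAndMetadata[String, String]) =>
  (mmd.topic, mmd.partition, mmd.offset, mmd.message)

val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder,
  (String, Int, Long, String)](ssc, kafkaParams, fromOffsets, messageHandler)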
10. getFromOffsets:
private[kafka] def getFromOffsets(
    kc: KafkaCluster,
    kafkaParams: Map[String, String],
    topics: Set[String]
  ): Map[TopicAndPartition, Long] = {
  val reset = kafkaParams.get("auto.offset.reset").map(_.toLowerCase)
  val result = for {
    topicPartitions <- kc.getPartitions(topics).right
    leaderOffsets <- (if (reset == Some("smallest")) {
      kc.getEarliestLeaderOffsets(topicPartitions)
    } else {
      kc.getLatestLeaderOffsets(topicPartitions)
    }).right
  } yield {
    leaderOffsets.map { case (tp, lo) =>
        // when the direct Kafka DStream is created, it talks to the Kafka cluster to obtain partition and offset information;
        // this ultimately happens through DirectKafkaInputDStream
        (tp, lo.offset)
    }
  }
  KafkaCluster.checkErrors(result)
}
11. Each Kafka topic/partition corresponds to a partition of the generated KafkaRDD.
/**
 *  A stream of {@link org.apache.spark.streaming.kafka.KafkaRDD} where
 * each given Kafka topic/partition corresponds to an RDD partition.
 * The spark configuration spark.streaming.kafka.maxRatePerPartition gives the maximum number
 *  of messages
 * per second that each '''partition''' will accept.
 * Starting offsets are specified in advance,
 * and this DStream is not responsible for committing offsets,
 * so that you can control exactly-once semantics.
 * For an easy interface to Kafka-managed offsets,
 *  see {@link org.apache.spark.streaming.kafka.KafkaCluster}
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 * configuration parameters</a>.
 *   Requires "metadata.broker.list" or "bootstrap.servers" to be set with Kafka broker(s),
 *   NOT zookeeper servers, specified in host1:port1,host2:port2 form.
 * @param fromOffsets per-topic/partition Kafka offsets defining the (inclusive)
 *  starting point of the stream
 * @param messageHandler function for translating each message into the desired type
 */
private[streaming]
class DirectKafkaInputDStream[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[K]: ClassTag,
  T <: Decoder[V]: ClassTag,
  R: ClassTag](
    ssc_ : StreamingContext,
    val kafkaParams: Map[String, String],
    val fromOffsets: Map[TopicAndPartition, Long],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends InputDStream[R](ssc_) with Logging {
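The spark.streaming.kafka.maxRatePerPartition setting mentioned in the scaladoc is an ordinary Spark configuration entry; for example, to cap consumption at 1000 messages per second per Kafka partition (the number is only an illustration):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("DirectKafkaRateLimited")  // placeholder app name
  // Upper bound on how many messages are read per second from EACH Kafka partition.
  .set("spark.streaming.kafka.maxRatePerPartition", "1000")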
12. The compute method of DirectKafkaInputDStream is shown below; each call to compute produces a new KafkaRDD:
override def compute(validTime: Time): Option[KafkaRDD[K, V, U, T, R]] = {
  // untilOffsets marks the end of the data range to fetch, so you know how many records this batch will compute
  val untilOffsets = clamp(latestLeaderOffsets(maxRetries))
  // create the KafkaRDD instance; each batch of this DirectKafkaInputDStream corresponds to exactly one KafkaRDD
  val rdd = KafkaRDD[K, V, U, T, R](
    context.sparkContext, kafkaParams, currentOffsets, untilOffsets, messageHandler)

  // Report the record number and metadata of this batch interval to InputInfoTracker.
  val offsetRanges = currentOffsets.map { case (tp, fo) =>
    val uo = untilOffsets(tp)
    OffsetRange(tp.topic, tp.partition, fo, uo.offset)
  }
  val description = offsetRanges.filter { offsetRange =>
    // Don't display empty ranges.
    offsetRange.fromOffset != offsetRange.untilOffset
  }.map { offsetRange =>
    s"topic: ${offsetRange.topic}\tpartition: ${offsetRange.partition}\t" +
      s"offsets: ${offsetRange.fromOffset} to ${offsetRange.untilOffset}"
  }.mkString("\n")
  // Copy offsetRanges to immutable.List to prevent from being modified by the user
  val metadata = Map(
    "offsets" -> offsetRanges.toList,
    StreamInputInfo.METADATA_KEY_DESCRIPTION -> description)
  val inputInfo = StreamInputInfo(id, rdd.count, metadata)
  ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

  currentOffsets = untilOffsets.map(kv => kv._1 -> kv._2.offset)
  Some(rdd)
}

The overall flow is as follows:
[Figure: overall flow of the Direct (No Receivers) Kafka approach]

What are the advantages of the Direct approach?
1. The Direct approach does not buffer data in receivers, so it avoids the memory-overflow risk that the Receiver approach's caching brings.
2. The Receiver approach is hard to scale out, whereas with the Direct approach the data is naturally distributed across multiple machines, one RDD partition per Kafka partition.
3. With receivers, data can arrive faster than it can be processed and pile up; the Direct approach avoids this because each batch reads only the offset range it is going to compute.
4. Semantic consistency: with the Direct approach every record is guaranteed to be processed (exactly-once at the transformation level).