
Big Data Fundamentals: Kafka (1) Introduction, Installation, and Usage

http://kafka.apache.org

 

I Introduction

Kafka® is used for building real-time data pipelines and streaming apps. It is horizontally scalable, fault-tolerant, wicked fast, and runs in production in thousands of companies.

Kafka is commonly used to build real-time data pipelines and streaming applications. It scales horizontally, is fault-tolerant, exceptionally fast, and already runs in production at thousands of companies. Kafka is written in Scala and was originally developed at LinkedIn.

 

A streaming platform has three key capabilities:

  • Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
  • Store streams of records in a fault-tolerant durable way.
  • Process streams of records as they occur.

Key capabilities: publish/subscribe, durable message storage, and real-time processing.

First a few concepts:

  • Kafka is run as a cluster on one or more servers that can span multiple datacenters.
  • The Kafka cluster stores streams of records in categories called topics.
  • Each record consists of a key, a value, and a timestamp.

Kafka runs as a cluster on one or more servers. The categories under which Kafka stores records are called topics, and every record consists of a key, a value, and a timestamp.

 

1 Traditional MQ Limitations

Messaging traditionally has two models: queuing and publish-subscribe. In a queue, a pool of consumers may read from a server and each record goes to one of them; in publish-subscribe the record is broadcast to all consumers. Each of these two models has a strength and a weakness. The strength of queuing is that it allows you to divide up the processing of data over multiple consumer instances, which lets you scale your processing. Unfortunately, queues aren't multi-subscriber—once one process reads the data it's gone. Publish-subscribe allows you to broadcast data to multiple processes, but has no way of scaling processing since every message goes to every subscriber.

Traditional messaging has two models: queuing and publish-subscribe.

In the queuing model, a pool of consumers reads from one server, and each message is processed by exactly one of them.

In the publish-subscribe model, every message is broadcast to all consumers.

Each model has its trade-offs. The strength of queuing is that it lets you divide message processing across multiple consumer instances, which scales processing; the weakness is that once a consumer has read a message, the message is gone. The strength of publish-subscribe is that it lets you broadcast a message to multiple consumers; the weakness is that processing cannot be scaled out, since every message goes to every subscriber.

As we will see later, Kafka combines the strengths of both models.

 

A traditional queue retains records in-order on the server, and if multiple consumers consume from the queue then the server hands out records in the order they are stored. However, although the server hands out records in order, the records are delivered asynchronously to consumers, so they may arrive out of order on different consumers. This effectively means the ordering of the records is lost in the presence of parallel consumption. Messaging systems often work around this by having a notion of "exclusive consumer" that allows only one process to consume from a queue, but of course this means that there is no parallelism in processing.

A traditional message queue keeps messages in order on the server, and when multiple consumers consume from the queue, the server hands messages out in storage order. However, because messages are delivered to consumers asynchronously, they may well arrive out of order at the different consumers, which means ordering is lost under parallel consumption. The usual workaround is an "exclusive consumer": only one process may consume from the queue, preserving order at the cost of parallelism.

2 Role & API

Kafka has five core APIs:

  • The Producer API allows an application to publish a stream of records to one or more Kafka topics.
  • The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
  • The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
  • The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
  • The AdminClient API allows managing and inspecting topics, brokers, and other Kafka objects.

The two most commonly used APIs are the Producer API (for sending records) and the Consumer API (for consuming records); Kafka also offers the Streams API, the Connector API, and the AdminClient API.
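As a quick illustration of the two most common APIs, here is a minimal sketch using the Java client (kafka-clients 2.0+); the broker address localhost:9092, the topic test, and the class name QuickStart are placeholders for this example, not anything mandated by Kafka:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QuickStart {
    public static void main(String[] args) {
        // Producer API: publish one record to the "test" topic
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("test", "key1", "hello kafka"));
        }

        // Consumer API: subscribe to the topic and poll once for records
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "demo-group");
        c.put("auto.offset.reset", "earliest");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("test"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records)
                System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
        }
    }
}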

In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic TCP protocol. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in many languages.

 

3 Topic & Partition

A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.

A topic is a category of message records: every record that is sent must specify a topic, and one topic can be consumed by many consumers.

For each topic, the Kafka cluster maintains a partitioned log that looks like this:

Every topic has one or more partitions, and each partition corresponds to a log file.

Each partition is an ordered, immutable sequence of records that is continually appended to—a structured commit log. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.

Each partition consists of an ordered sequence of records that are appended to a log file; every record in a partition has a sequential id called the offset, which uniquely identifies that record within the partition.

The Kafka cluster durably persists all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.

The Kafka cluster persists all records, whether or not they have been consumed, for a configurable retention period. With retention set to two days, a record can be consumed for two days after it is published and is then discarded to free space. Kafka's performance is effectively constant with respect to data size, so storing data for a long time is not a problem.

In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".

The only per-consumer metadata is the offset it has reached in each partition. The offset is controlled by the consumer: normally it advances linearly as the consumer reads records, but the consumer can also rewind it to re-consume old records or jump ahead to skip some.
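For example, the sketch below (offset 42, topic test, and the class name OffsetControl are arbitrary placeholders) assigns a partition manually, rewinds to an older offset to reprocess history, and then jumps to the latest offset to consume from "now":

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class OffsetControl {
    public static void main(String[] args) {
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            TopicPartition tp = new TopicPartition("test", 0);
            consumer.assign(Collections.singletonList(tp)); // manual assignment, no group management
            consumer.seek(tp, 42L);  // rewind: the next poll() reads records starting at offset 42
            // ...poll() and reprocess historical records here...
            consumer.seekToEnd(Collections.singletonList(tp)); // or skip ahead and consume from "now"
        }
    }
}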

The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second, they act as the unit of parallelism—more on that in a bit.

A topic can have many partitions, which brings two benefits: the topic's data can exceed the capacity of a single machine, and partitions act as the unit of parallelism (for both producing and consuming).

 

The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.

Partitions are distributed over the servers of the Kafka cluster, and every server handles requests for one or more partitions; each partition is replicated across a configurable number of servers for fault tolerance.

Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.

Each partition has one server that acts as its leader and zero or more servers that act as followers, the number being determined by the replication factor. The leader handles all reads and writes for the partition, while the followers passively replicate the leader's data; if the leader fails, one of the followers automatically becomes the new leader. Every server acts as the leader for some of its partitions and as a follower for others, so load is well balanced across the cluster.
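Partition count and replication factor are fixed per topic at creation time. Here is a hedged sketch using the AdminClient API, assuming a cluster of at least three brokers reachable through localhost:9092 (topic and class names are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(p)) {
            // 4 partitions spread load across brokers; replication factor 3 means
            // each partition gets 1 leader plus 2 followers on different brokers
            NewTopic topic = new NewTopic("my-replicated-topic", 4, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}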

4 Producer & Consumer

Producers publish data to the topics of their choice. The producer is responsible for choosing which record to assign to which partition within the topic. This can be done in a round-robin fashion simply to balance load or it can be done according to some semantic partition function (say based on some key in the record). 

Consumers label themselves with a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.

The producer sends records to a topic and decides which partition each record goes to; the common strategy is round-robin (for load balancing), but records can also be partitioned by key, so that records with the same key end up in the same partition.

Consumers are grouped by a consumer group name; each record in a topic is delivered to exactly one consumer within each subscribing group.

 

If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.

If all consumers are in the same group, records are automatically load-balanced across them.

If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.

If every consumer is in a different group, each record is broadcast to all of the consumers.

[Figure: a two-server Kafka cluster hosting four partitions (P0-P3), with two consumer groups; consumer group A has two consumer instances and group B has four.]

The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining instances.

Consumption is implemented in Kafka by dividing the partitions evenly over the consumers in a group; group membership is handled dynamically by the Kafka protocol, so new instances take over partitions from existing members, and the partitions of a dead instance are redistributed.
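A small sketch of this behavior, assuming the test topic from above: start two copies of this program with group id group-a and the topic's partitions are split between them (queuing); start one more copy with group-b and it independently receives every record as well (publish-subscribe). Group and class names are placeholders.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupDemo {
    public static void main(String[] args) {
        String groupId = args.length > 0 ? args[0] : "group-a"; // pass the group name on the command line
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", groupId);
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1)))
                    System.out.printf("[%s] partition=%d offset=%d value=%s%n",
                            groupId, r.partition(), r.offset(), r.value());
            }
        }
    }
}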

Kafka only provides a total order over records within a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.

Kafka only guarantees ordering within a partition; combined with the ability to partition by key, this satisfies most applications' ordering requirements. If you need a strict total order over all records in a topic, the only option is a topic with a single partition, which also implies at most one consumer per consumer group.
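For instance, a producer that keys all of one user's events with the same key keeps those events in a single partition and therefore in send order; the topic, keys, and class name below are illustrative only:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedOrdering {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            // Same key => same partition => consumed in the order sent
            producer.send(new ProducerRecord<>("test", "user-42", "login"));
            producer.send(new ProducerRecord<>("test", "user-42", "add-to-cart"));
            producer.send(new ProducerRecord<>("test", "user-42", "checkout"));
        }
    }
}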

5 Guarantees

At a high-level Kafka gives the following guarantees:

  • Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.
  • A consumer instance sees records in the order they are stored in the log.
  • For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log.

Records sent by one producer to a particular topic partition are appended in the order they are sent, i.e. send order determines storage order; a consumer sees the records of a partition in the order they are stored, i.e. storage order determines consumption order; with N replicas, up to N-1 server failures can be tolerated without losing committed records.

II Installation and Usage

> tar -xzf kafka_2.11-2.0.0.tgz
> cd kafka_2.11-2.0.0

1 Single-Node Startup

> bin/zookeeper-server-start.sh config/zookeeper.properties
> bin/kafka-server-start.sh config/server.properties

2 Command-Line Clients

2.1 Creating and Viewing Topics

> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
> bin/kafka-topics.sh --list --zookeeper localhost:2181
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic   PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: my-replicated-topic  Partition: 0    Leader: 1   Replicas: 1,2,0 Isr: 1,2,0
  • "leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
  • "replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
  • "isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.

2.2 Producing and Consuming Messages

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

2.3 Modifying and Viewing Broker Configs

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads

From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker. See the Dynamic Update Mode column in Broker Configs for the update mode of each broker config.
  • read-only: Requires a broker restart for update
  • per-broker: May be updated dynamically for each broker
  • cluster-wide: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.
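Broker configs can likewise be inspected from code. A hedged sketch that reads broker 0's configuration through the AdminClient API (broker id and class name are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowBrokerConfig {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(p)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config cfg = admin.describeConfigs(Collections.singletonList(broker))
                    .all().get().get(broker);
            // Print every config entry, including dynamically updated ones
            cfg.entries().forEach(e -> System.out.printf("%s = %s%n", e.name(), e.value()));
        }
    }
}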

3 Cluster Startup

3.1 Configuration

broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
zookeeper.connect=

The broker.id property is the unique and permanent name of each node in the cluster. 

Other important configs:

auto.create.topics.enable: whether to automatically create a topic when a record is sent to one that does not exist
delete.topic.enable: whether deleting topics is allowed
num.partitions: default number of partitions for automatically created topics
default.replication.factor: default replication factor for automatically created topics
broker.id.generation.enable: whether broker ids may be generated automatically
host.name: deprecated; prefer listeners
log.dirs: data directories; the default is under /tmp and must be changed
log.cleanup.policy: log cleanup policy (delete or compact)
log.retention.bytes: maximum log size per partition before old segments are deleted
log.retention.ms: how long log segments are retained before deletion
log.roll.ms: maximum time before a new log segment is rolled
log.segment.bytes: maximum size of a single log segment file
replica.lag.time.max.ms: how far a follower may lag before it is dropped from the ISR
log.message.timestamp.type: whether record timestamps are CreateTime or LogAppendTime
num.network.threads: number of threads handling network requests
num.io.threads: number of threads handling disk I/O
compression.type: final compression type for a topic (e.g. gzip, snappy, lz4, or producer to keep the producer's original codec)