
Learning Notes: Learning Big Data from Scratch - 16. Kafka Installation and Usage

Kafka is open-source message-processing software offering high throughput and high availability. It can serve as a collection tool for big data or as a data pipeline.

1. Download: http://kafka.apache.org/downloads
Pick the build matching your Scala version; I downloaded the Scala 2.12 build - kafka_2.12-2.1.0.tgz (asc, sha512)
2. Extract
tar -zxvf kafka_2.12-2.1.0.tgz
3. Start
(1) Start the bundled ZooKeeper
bin/zookeeper-server-start.sh config/zookeeper.properties &
(2) Start Kafka
bin/kafka-server-start.sh config/server.properties &
Run jps to verify that both processes are up:
[root@centos7 kafka_2.12-2.1.0]# jps
4477 QuorumPeerMain
8637 Jps
8062 Kafka
(3) Create a topic named "test" with one partition and one replica. There is no need to create it again the next time Kafka is started.
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
List the created topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
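
The same topic can also be created from Java: the kafka-clients library added in section 5 below ships an AdminClient. A minimal sketch, assuming a broker at localhost:9092; the class name CreateTestTopic is illustrative:

package com.linbin.kafka;

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTestTopic {
	public static void main(String[] args) throws Exception {
	    Properties props = new Properties();
	    props.put("bootstrap.servers", "localhost:9092");   // the Kafka broker, not ZooKeeper
	    try (AdminClient admin = AdminClient.create(props)) {
	        // one partition, replication factor 1 - the same settings as the shell command above
	        NewTopic topic = new NewTopic("test", 1, (short) 1);
	        admin.createTopics(Collections.singleton(topic)).all().get();
	    }
	}
}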

4. Test
(1) In the first shell terminal, start a message consumer (note that --bootstrap-server must point at the Kafka broker on port 9092, not at ZooKeeper's 2181):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

(2) In a second shell terminal, start a message producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Send a few test messages:

[root@centos7 kafka_2.12-2.1.0]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>hello
>is working?
>

The sent messages ("hello", "is working?") appear in the first shell terminal.


5. Sending and receiving messages from Java

Create a Maven project and add the following dependencies to pom.xml:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>2.1.0</version>
</dependency>

Two programs are needed: one that sends messages (the producer) and one that receives them (the consumer). Only the kafka-clients dependency is actually used by the code below; kafka-streams is not required for these examples.

The source code is adapted from the reference articles listed at the end, with some modifications.

MyKafkaProducer.java

package com.linbin.kafka;

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MyKafkaProducer {
	public static void main(String[] args) {
	    Properties props = new Properties();
	    props.put("bootstrap.servers", "centos7:9092");   // Kafka broker address
	    props.put("acks", "all");                         // wait for all in-sync replicas to acknowledge each send
	    props.put("retries", 0);                          // do not retry failed sends
	    props.put("batch.size", 16384);
	    props.put("linger.ms", 1);
	    props.put("buffer.memory", 33554432);
	    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
	    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
	    Producer<String, String> producer = new KafkaProducer<String, String>(props);
	    // send 100 records to topic "test", using the loop index as both key and value
	    for (int i = 0; i < 100; i++)
	        producer.send(new ProducerRecord<String, String>("test", Integer.toString(i), Integer.toString(i)));
	    producer.close();
	}
}
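
Note that producer.send() is asynchronous; the loop above fires the records and relies on close() to flush them before exiting. If delivery confirmation is needed, send() also accepts a Callback as a second argument. A minimal sketch of a replacement for the send() call above (it additionally needs Callback and RecordMetadata imported from org.apache.kafka.clients.producer):

producer.send(new ProducerRecord<String, String>("test", Integer.toString(i), Integer.toString(i)),
        new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception e) {
                if (e != null)
                    e.printStackTrace();   // the send failed
                else
                    System.out.println("delivered to partition " + metadata.partition()
                            + " at offset " + metadata.offset());
            }
        });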

MyKafkaConsumer.java

package com.linbin.kafka;

import java.time.Duration;
import java.util.Arrays;
import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class MyKafkaConsumer {
	public static void main(String[] args) {
	    Properties props = new Properties();
	    props.put("bootstrap.servers", "centos7:9092");   // Kafka broker address
	    props.put("group.id", "test");                    // consumer group id
	    props.put("enable.auto.commit", "true");          // commit offsets automatically
	    props.put("auto.commit.interval.ms", "1000");     // every second
	    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
	    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
	    final KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

	    // subscribe to topic "test"; the rebalance listener is a no-op here, but is the
	    // place to react to partitions being assigned to or revoked from this consumer
	    consumer.subscribe(Arrays.asList("test"), new ConsumerRebalanceListener() {
			@Override
			public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
			}
			@Override
			public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
			}
	    });

	    // poll(long) is deprecated as of Kafka 2.0; poll(Duration) is the current form
	    while (true) {
	        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
	        for (ConsumerRecord<String, String> record : records)
	            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
	    }
	}
}
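
With enable.auto.commit=true, offsets are committed in the background every second, so a consumer that crashes between commits can see some records re-delivered after a restart. If that matters, a common variation is to commit manually after a batch has been processed. A sketch of just the parts that change:

props.put("enable.auto.commit", "false");   // take over offset management

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        // ... process the record here ...
    }
    consumer.commitSync();   // commit offsets only after the whole batch is processed
}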

The MyKafkaConsumer console shows the received records (the SLF4J lines are only warnings that no logging binding is on the classpath and can be ignored):

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
offset = 211, key = null, value = ddddd
offset = 212, key = null, value = how are you
offset = 213, key = 0, value = 0
offset = 214, key = 1, value = 1
offset = 215, key = 2, value = 2
offset = 216, key = 3, value = 3
offset = 217, key = 4, value = 4
offset = 218, key = 5, value = 5

Messages can be produced and consumed interchangeably between the shell consoles and the Java programs. In the output above, the records with null keys were typed into the console producer, while the numbered key/value pairs came from MyKafkaProducer.


References:
https://www.jianshu.com/p/0e378e51b442 - a simple Java Kafka application example
https://www.cnblogs.com/skying555/p/7903457.html - a classic introductory Kafka tutorial
https://www.cnblogs.com/hei12138/p/7805475.html - Kafka in practice