
[imooc Hands-On] Spark Streaming Real-Time Stream Processing Project in Action, Notes 5: Inscription, Upgraded Edition


Level-1 inscription:

Deploying and using a single node with a single broker

$KAFKA_HOME/config/server.properties
broker.id=0
listeners
host.name
log.dirs
zookeeper.connect
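Filled in with the values from the level-2 notes further down (the install path under /home/hadoop/app is assumed from the usage message below), the single-broker config would look roughly like:

broker.id=0
listeners=PLAINTEXT://:9092
host.name=hadoop000
log.dirs=/home/hadoop/app/tmp/kafka-logs
zookeeper.connect=hadoop000:2181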

Start Kafka
kafka-server-start.sh
USAGE: /home/hadoop/app/kafka_2.11-0.9.0.0/bin/kafka-server-start.sh [-daemon] server.properties [--override property=value]*

kafka-server-start.sh $KAFKA_HOME/config/server.properties
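To confirm the broker actually came up (assuming ZooKeeper was started beforehand on hadoop000:2181), a quick check is:

jps
// ZooKeeper shows up as QuorumPeerMain and the broker as Kafka; if Kafka is missing, recheck server.properties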

Create a topic (goes through ZK):
kafka-topics.sh --create --zookeeper hadoop000:2181 --replication-factor 1 --partitions 1 --topic hello_topic

List all topics:
kafka-topics.sh --list --zookeeper hadoop000:2181

Produce messages (goes through the broker):
kafka-console-producer.sh --broker-list hadoop000:9092 --topic hello_topic

Consume messages (goes through ZK):
kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic hello_topic --from-beginning
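A quick smoke test (illustrative session, not captured output): keep the producer and the consumer open in two terminals; anything typed into the producer should appear in the consumer almost immediately.

terminal 1 (producer input):  hello
terminal 1 (producer input):  kafka
terminal 2 (consumer output): hello
terminal 2 (consumer output): kafka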


Using --from-beginning (consume from the start of the log instead of only new messages)

Describe all topics: kafka-topics.sh --describe --zookeeper hadoop000:2181
Describe a specific topic: kafka-topics.sh --describe --zookeeper hadoop000:2181 --topic hello_topic
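For hello_topic (one partition, replication factor 1, on broker 0) the describe output looks roughly like this (illustrative, not captured output):

Topic:hello_topic  PartitionCount:1  ReplicationFactor:1  Configs:
    Topic: hello_topic  Partition: 0  Leader: 0  Replicas: 0  Isr: 0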

Single node with multiple brokers
server-1.properties
log.dirs=/home/hadoop/app/tmp/kafka-logs-1
listeners=PLAINTEXT://:9093
broker.id=1

server-2.properties
log.dirs=/home/hadoop/app/tmp/kafka-logs-2
listeners=PLAINTEXT://:9094
broker.id=2

server-3.properties
log.dirs=/home/hadoop/app/tmp/kafka-logs-3
listeners=PLAINTEXT://:9095
broker.id=3
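The three files are easiest to create by copying the original server.properties and then changing only the three keys listed above:

cd $KAFKA_HOME/config
cp server.properties server-1.properties
cp server.properties server-2.properties
cp server.properties server-3.properties
// edit broker.id, listeners and log.dirs in each copy as shown above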

kafka-server-start.sh -daemon $KAFKA_HOME/config/server-1.properties &
kafka-server-start.sh -daemon $KAFKA_HOME/config/server-2.properties &
kafka-server-start.sh -daemon $KAFKA_HOME/config/server-3.properties &
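To verify that all three brokers are running, jps can list them (when started this way, each Kafka process carries its own properties file as an argument):

jps -m
// expect three Kafka entries, one per server-N.properties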

kafka-topics.sh --create --zookeeper hadoop000:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic

kafka-console-producer.sh --broker-list hadoop000:9093,hadoop000:9094,hadoop000:9095 --topic my-replicated-topic
kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic my-replicated-topic

kafka-topics.sh --describe --zookeeper hadoop000:2181 --topic my-replicated-topic
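Its describe output looks roughly like the following (the Replicas and Isr values match the level-2 notes below; the leader may be any of the three brokers):

Topic:my-replicated-topic  PartitionCount:1  ReplicationFactor:3  Configs:
    Topic: my-replicated-topic  Partition: 0  Leader: 3  Replicas: 3,1,2  Isr: 3,1,2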

Level-2 inscription:

For the Kafka download, version 0.9.0.0 is relatively stable; then pick the build for the matching Scala version (http://kafka.apache.org/downloads).

Deploying and using a single node with a single broker =>

Configure the environment variables, then edit the config file: config/server.properties
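The environment variable part might look like this in ~/.bash_profile (install path taken from the usage message in the level-1 notes; the profile file is an assumption):

export KAFKA_HOME=/home/hadoop/app/kafka_2.11-0.9.0.0
export PATH=$KAFKA_HOME/bin:$PATH
// then reload with: source ~/.bash_profile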

broker.id=0                 // unique broker id

listeners=PLAINTEXT://:9092               // listening port; producers send messages to the broker on this port

host.name=hadoop000           // the default localhost also works

log.dirs=/home/hadoop/app/tmp/kafka-logs     // data directory; create the tmp directory in advance, kafka-logs itself need not be created

zookeeper.connect=hadoop000:2181     // creating topics, listing topics, and consumers all connect to this address

[num.partitions=1]             // default number of partitions

Start Kafka: kafka-server-start.sh $KAFKA_HOME/config/server.properties // if unsure of the usage, run kafka-server-start.sh with no arguments first to see the usage message

Create a topic: kafka-topics.sh --create --zookeeper hadoop000:2181 --replication-factor 1 --partitions 1 --topic hello_topic

List all topics: kafka-topics.sh --list --zookeeper hadoop000:2181

Produce messages: kafka-console-producer.sh --broker-list hadoop000:9092 --topic hello_topic

Consume messages: kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic hello_topic // add --from-beginning to also consume messages produced before the consumer started

Describe all topics: kafka-topics.sh --describe --zookeeper hadoop000:2181

Describe a specific topic: kafka-topics.sh --describe --zookeeper hadoop000:2181 --topic hello_topic

In the detailed output: Replicas: 3,1,2 // replica nodes    Isr: 3,1,2 // in-sync (alive) replicas

Single node with multiple brokers =>
