Kafka Installation and Setup, Common Errors, and Common Commands
I. Install Scala 2.11.4
1. Copy the scala-2.11.4.tgz provided with the course to /usr/local on spark1 using WinSCP.
2. Extract it: tar -zxvf scala-2.11.4.tgz
3. Rename the directory: mv scala-2.11.4 scala
4. Configure the Scala environment variables:
   vi ~/.bashrc
   export SCALA_HOME=/usr/local/scala
   export PATH=$PATH:$SCALA_HOME/bin
   source ~/.bashrc
5. Check that Scala installed correctly: scala -version
6. Install Scala on spark2 and spark3 the same way; simply scp the scala directory and .bashrc to spark2 and spark3.
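The environment-variable part of step 4 can be sketched as a runnable fragment (assuming Scala was unpacked and renamed to /usr/local/scala as in step 3). Note that PATH must be appended to, not overwritten, or every other command on the machine disappears from the shell:

```shell
# Sketch of the .bashrc additions; /usr/local/scala is the rename target from step 3
export SCALA_HOME=/usr/local/scala
# Append to the existing PATH rather than replacing it
export PATH="$PATH:$SCALA_HOME/bin"
# Confirm the scala bin directory is now on the PATH
echo "$PATH" | tr ':' '\n' | grep scala
```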
II. Install the Kafka Package
1. Copy the kafka_2.9.2-0.8.1.tgz archive to /usr/local on spark1 using Xshell.
2. Extract it: tar -zxvf kafka_2.9.2-0.8.1.tgz
3. Rename the directory: mv kafka_2.9.2-0.8.1 kafka
4. Configure Kafka: vim /usr/local/kafka/config/server.properties
   broker.id: an increasing integer (0, 1, 2, ...), the unique id of each broker in the cluster
   zookeeper.connect=192.168.1.107:2181,192.168.1.108:2181,192.168.1.109:2181
5. Install slf4j:
   Upload slf4j-1.7.6.zip to /usr/local
   unzip slf4j-1.7.6.zip
   Copy slf4j-nop-1.7.6.jar from the slf4j directory into Kafka's libs directory.
6. Install Kafka on spark2 and spark3 the same way; simply scp the kafka directory to spark2 and spark3.
7. The only difference is broker.id in server.properties, which must be set to 1 and 2 respectively.
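The per-broker configuration from steps 4 and 7 can be sketched as follows; a generated file under /tmp stands in for the real /usr/local/kafka/config/server.properties, and the IPs are the ZooKeeper addresses from step 4:

```shell
# BROKER_ID is 0 on spark1, 1 on spark2, 2 on spark3 (steps 6-7)
BROKER_ID=0
# Write the two settings that differ per cluster; /tmp path is a stand-in
cat > /tmp/server.properties.sketch <<EOF
broker.id=${BROKER_ID}
zookeeper.connect=192.168.1.107:2181,192.168.1.108:2181,192.168.1.109:2181
EOF
grep '^broker.id=' /tmp/server.properties.sketch  # → broker.id=0
```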
III. Start the Kafka Cluster
1. Run the following on each of the three machines.
   Foreground start:
   ./kafka-server-start.sh ../config/server.properties
   Background start:
   ./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
   ./kafka-server-start.sh -daemon ../config/server.properties
   Start with a JMX port:
   JMX_PORT=2898 ./kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
2. Fixing the "Unrecognized VM option 'UseCompressedOops'" error:
   vi bin/kafka-run-class.sh
   if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
     KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
   fi
   Remove -XX:+UseCompressedOops from this line.
3. Check with jps that the broker started successfully.
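The JVM-flag fix in step 2 can also be done with sed instead of editing by hand. The snippet below runs against a stand-in copy of the offending line, so it is safe to try anywhere; on the real cluster you would point it at bin/kafka-run-class.sh instead:

```shell
# Stand-in file containing a shortened version of the default from kafka-run-class.sh
echo 'KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC"' > /tmp/kafka-run-class.sketch
# Drop the flag that the local JVM rejects
sed -i 's/-XX:+UseCompressedOops //' /tmp/kafka-run-class.sketch
cat /tmp/kafka-run-class.sketch
```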
IV. Test the Kafka Cluster
# Create a topic:
bin/kafka-topics.sh --zookeeper 192.168.88.200:2181,192.168.88.101:2181,192.168.88.102:2181 --topic wangcc --replication-factor 1 --partitions 1 --create
# Write data to a Kafka topic (console producer):
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wangcc
# Start a consumer:
./kafka-console-consumer.sh --zookeeper weekend01:2181,weekend02:2181,weekend03:2181 --topic wangcc --from-beginning
Common Errors During Installation and Setup
1. Creating a topic fails with:
Error while executing topic command org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/ids
Cause: the zookeeper directory was never created under the Kafka directory, with myid set.
Fix:
cd /uardata10/chbtmp/package/kafka_2.10-0.8.2.1
mkdir zookeeper
cd zookeeper
touch myid
echo 0 > myid
Restart Kafka and the error is gone.
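The fix above, sketched against a throwaway directory (on the real node the parent is the Kafka install directory, and the value written must be this node's id):

```shell
# Throwaway stand-in for the Kafka install directory
mkdir -p /tmp/kafka-sketch/zookeeper
# myid holds this node's ZooKeeper server id (0 here, per the fix above)
echo 0 > /tmp/kafka-sketch/zookeeper/myid
cat /tmp/kafka-sketch/zookeeper/myid  # → 0
```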
1.1 Creating a topic with more replicas than available brokers
The topic was created with partitions set to 5 and replication-factor 2, but there is only one broker:
./bin/kafka-topics.sh -zookeeper idc007128:2181,idc007124:2181,idc007123:2181 -topic test -replication-factor 2 -partitions 5 -create
## Error
Error while executing topic command replication factor: 2 larger than available brokers: 1
Fix:
Set -replication-factor 1 (a single replica):
./bin/kafka-topics.sh -zookeeper idc007128:2181,idc007124:2181,idc007123:2181 -topic test -replication-factor 1 -partitions 1 -create
Created topic "test".
2. InconsistentBrokerIdException on startup. Cause: config/server.properties was copied from another node without changing the hostname and broker.id:
[2017-06-14 18:07:55,583] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentBrokerIdException: Configured brokerId 0 doesn't match stored brokerId 1 in meta.properties
at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:630)
at kafka.server.KafkaServer.startup(KafkaServer.scala:175)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Fix:
Step 1: correct broker.id and the hostname in config/server.properties.
Step 2: delete the log.dirs directory (log.dirs=/tmp/kafka-logs) and restart.
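Before deleting anything, the mismatch can be confirmed by comparing the two ids directly. The sketch below uses stand-in files under /tmp that reproduce the situation from the stack trace above; on a real broker the two paths would be config/server.properties and meta.properties inside the log.dirs directory:

```shell
# Stand-ins reproducing the mismatch: configured id 0, stored id 1
printf 'broker.id=0\n' > /tmp/broker.properties.sketch
mkdir -p /tmp/kafka-logs-sketch
printf 'version=0\nbroker.id=1\n' > /tmp/kafka-logs-sketch/meta.properties
# Extract and compare the two broker.id values
cfg=$(grep '^broker.id=' /tmp/broker.properties.sketch | cut -d= -f2)
meta=$(grep '^broker.id=' /tmp/kafka-logs-sketch/meta.properties | cut -d= -f2)
if [ "$cfg" != "$meta" ]; then
  echo "mismatch: config=$cfg meta=$meta"  # → mismatch: config=0 meta=1
fi
```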
3. Kafka configuration file error
[2018-10-12 08:35:04,553] ERROR Error when sending message to topic TestTopic with key: null, value: 12 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-10-12 08:35:05,335] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-10-12 08:35:06,326] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-10-12 08:35:07,309] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-10-12 08:35:08,444] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Fix: edit server.properties and set
listeners=PLAINTEXT://:9092
4. Kafka shuts down on its own right after starting
If the broker dies some time after being started, start it with the -daemon flag instead:
nohup bin/kafka-server-start.sh -daemon ./config/server.properties &   (background start)
If a freshly configured cluster dies immediately after starting, the Kafka logs show that broker.id in the configuration file does not match the broker.id recorded in meta.properties.
The broker.id=0 property in config/server.properties (the broker's globally unique id, which must not repeat) disagrees with the broker.id value in the meta.properties file under the log.dirs path (log.dirs=/export/servers/logs/kafka here; check your own configuration). That mismatch makes startup fail; after correcting it the broker starts normally.
Common commands:
1. Create a topic
kafka-topics.sh --zookeeper 192.168.88.200:2181,192.168.88.101:2181,192.168.88.102:2181 --topic wangcc --replication-factor 1 --partitions 1 --create
2. Delete a topic
kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic-test
3. List existing topics
kafka-topics.sh --zookeeper localhost:2181 --list
4. Describe a topic
kafka-topics.sh --zookeeper localhost:2181 --describe --topic myTopic
5. Write data to a Kafka topic (console producer)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wangcc
6. Start a consumer
./kafka-console-consumer.sh --zookeeper weekend01:2181,weekend02:2181,weekend03:2181 --topic wangcc --from-beginning