
CentOS 7 Kafka 2.10-0.10.2.0 cluster

Quick start:

For background on Kafka, see:

0) Installing the Kafka cluster; prepare three servers:
     server1: 192.168.0.1
     server2: 192.168.0.2
     server3: 192.168.0.3

1) Download kafka_2.10-0.10.2.0.tgz.

2) Unpack:
     tar -zxvf kafka_2.10-0.10.2.0.tgz
     mv kafka_2.10-0.10.2.0 /usr/local/kafka
     mkdir /usr/local/kafka/logs

3) Configure. Edit /usr/local/kafka/config/server.properties. broker.id, log.dirs and zookeeper.connect must be adapted to your environment; tune the remaining settings as you see fit:

   #==========================================
   broker.id=1
   #port=9092   # default
   num.network.threads=3
   num.io.threads=8
   socket.send.buffer.bytes=1048576
   socket.receive.buffer.bytes=1048576
   socket.request.max.bytes=104857600
   log.dirs=/usr/local/kafka/logs
   num.partitions=2
   num.recovery.threads.per.data.dir=1
   log.retention.hours=168
   log.segment.bytes=536870912
   log.retention.check.interval.ms=300000
   zookeeper.connect=192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/kafka
   zookeeper.connection.timeout.ms=6000
   #==========================================

   Note: this creates a dedicated kafka node (chroot) in zookeeper, which keeps the cluster's znodes easier to manage.

4) Start Kafka. For server2 and server3, copy the installation from server1 into the same location:
     scp -r /usr/local/kafka [email protected]:/usr/local/
     scp -r /usr/local/kafka [email protected]:/usr/local/

  cd /usr/local/kafka
  Set broker.id in config/server.properties on each of the three machines:
     server1: broker.id=1
     server2: broker.id=2
     server3: broker.id=3
  Start the broker:
     /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
  Check that it is listening:
     lsof -i:9092

5) Create a topic (one partition, three replicas):
     /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/kafka --replication-factor 3 --partitions 1 --topic mytopic

   # to delete it again:
   # /usr/local/kafka/bin/kafka-topics.sh --delete --zookeeper 192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/kafka --topic mytopic

6) List topics:
     /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/kafka

7) Start a producer (sends):
     ./bin/kafka-console-producer.sh --broker-list 192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092 --topic mytopic
     my msg1
     my msg2
     ^C

8) Start a consumer (receives):
     /usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/kafka --topic mytopic --from-beginning
     my msg1
     my msg2
     ^C

9) Describe the topic:
     /usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.0.1:2181,192.168.0.2:2181,192.168.0.3:2181/kafka --topic mytopic

10) Kill the broker on server1:
     kill `lsof -i:9092 | sed -n '2p' | awk '{print $2}'`
   or:
     kill `ps -ef | grep kafka.Kafka | grep -v grep | awk '{print $2}'`
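Both backtick pipelines above work the same way: narrow a process listing down to the broker's line, then let `awk '{print $2}'` pull out the PID column. The filter half can be exercised on a canned `ps -ef` line, no running broker required (the PID 4242 below is made up for the demonstration):

```shell
# A canned ps -ef line; column 2 is the PID, as in real ps -ef output.
line='root      4242     1  0 10:00 ?        00:00:10 java -Xmx1G kafka.Kafka config/server.properties'
# The same filter as the kill command above, minus the kill itself.
pid=$(printf '%s\n' "$line" | grep kafka.Kafka | grep -v grep | awk '{print $2}')
echo "$pid"   # prints 4242
```

The `grep -v grep` step only matters when the pipeline reads live `ps` output, where the `grep kafka.Kafka` process would otherwise match itself.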

=============================================================

Common problems:

=============================================================

1. Startup fails with a regex-related VerifyError
[2017-04-29 19:25:54,810] FATAL  (kafka.Kafka$)
java.lang.VerifyError: Uninitialized object exists on backward branch 162
Exception Details:
  Location:
    scala/util/matching/Regex.unapplySeq(Lscala/util/matching/Regex$Match;)Lscala/Option; @216: goto
  Reason:
    Error exists in the bytecode
  Bytecode:
    0x0000000: 2bc6 000a 2bb6 00ef c700 07b2 0052 b02b
    0x0000010: b600 f2b6 00f3 2ab6 0054 4d59 c700 0b57
    0x0000020: 2cc6 000d a700 c92c b600 f799 00c2 bb00
    0x0000030: 6059 b200 65b2 006a 043e c700 0501 bf1d
    0x0000040: 2bb6 00f8 b600 74b6 0078 2bba 0100 0000
    0x0000050: b200 93b6 0097 3a06 3a05 59c7 0005 01bf
    0x0000060: 3a04 1906 b200 93b6 009b a600 7619 04b2
    0x0000070: 00a0 a600 09b2 00a0 a700 71bb 00a2 5919
    0x0000080: 04b6 00a8 3a0b 2b19 0bb8 00fc b200 a0b7
    0x0000090: 00ac 3a07 1907 3a08 1904 b600 afc0 00a4
    0x00000a0: 3a09 1909 b200 a0a5 0034 bb00 a259 1909
    0x00000b0: b600 a83a 0b2b 190b b800 fcb2 00a0 b700
    0x00000c0: ac3a 0a19 0819 0ab6 00b3 190a 3a08 1909
    0x00000d0: b600 afc0 00a4 3a09 a7ff ca19 07a7 000c
    0x00000e0: 1904 1905 1906 b800 b9b7 00bc b02a 2bb6
    0x00000f0: 00ef b601 02b0                         
  Stackmap Table:
    same_frame(@11)
    same_frame(@15)
    full_frame(@39,{Object[#2],Object[#34],Object[#86]},{Object[#86]})
    same_frame(@46)
    full_frame(@63,{Object[#2],Object[#34],Object[#86],Integer},{Uninitialized[#46],Uninitialized[#46],Object[#98]})
    full_frame(@96,{Object[#2],Object[#34],Object[#86],Integer,Top,Object[#206],Object[#208]},{Uninitialized[#46],Uninitialized[#46],Object[#164]})
    full_frame(@123,{Object[#2],Object[#34],Object[#86],Integer,Object[#164],Object[#206],Object[#208]},{Uninitialized[#46],Uninitialized[#46]})
    full_frame(@162,{Object[#2],Object[#34],Object[#86],Integer,Object[#164],Object[#206],Object[#208],Object[#162],Object[#162],Object[#164],Top,Object[#4]},{Uninitialized[#46],Uninitialized[#46]})
    full_frame(@219,{Object[#2],Object[#34],Object[#86],Integer,Object[#164],Object[#206],Object[#208],Object[#162],Object[#162],Object[#164],Top,Object[#4]},{Uninitialized[#46],Uninitialized[#46]})
    full_frame(@224,{Object[#2],Object[#34],Object[#86],Integer,Object[#164],Object[#206],Object[#208]},{Uninitialized[#46],Uninitialized[#46]})
    full_frame(@233,{Object[#2],Object[#34],Object[#86],Integer,Object[#164],Object[#206],Object[#208]},{Uninitialized[#46],Uninitialized[#46],Object[#4]})
    full_frame(@237,{Object[#2],Object[#34],Object[#86]},{})

	at scala.collection.immutable.StringLike.r(StringLike.scala:287)
	at scala.collection.immutable.StringLike.r$(StringLike.scala:287)
	at scala.collection.immutable.StringOps.r(StringOps.scala:29)
	at scala.collection.immutable.StringLike.r(StringLike.scala:276)
	at scala.collection.immutable.StringLike.r$(StringLike.scala:276)
	at scala.collection.immutable.StringOps.r(StringOps.scala:29)
	at kafka.cluster.EndPoint$.<init>(EndPoint.scala:29)
	at kafka.cluster.EndPoint$.<clinit>(EndPoint.scala)
	at kafka.server.Defaults$.<init>(KafkaConfig.scala:63)
	at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
	at kafka.server.KafkaConfig$.<init>(KafkaConfig.scala:616)
	at kafka.server.KafkaConfig$.<clinit>(KafkaConfig.scala)
	at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
	at kafka.Kafka$.main(Kafka.scala:58)
	at kafka.Kafka.main(Kafka.scala)


I searched for a whole day without luck; in the end, reading the source suggested a JDK problem. The failing JDK was jdk-8u20-linux-x64.tar.gz; after switching to jdk-8u131-linux-x64.tar.gz the broker started normally.
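The VerifyError above is the JVM's bytecode verifier rejecting code in the Scala runtime, so the JDK on the PATH is the first thing to check before starting the broker. A small sketch of pulling the version string out of `java -version` output (shown here on a canned line so it runs anywhere; the same `sed` works on the live output, which `java` writes to stderr):

```shell
# `java -version` prints a first line like: java version "1.8.0_131"
line='java version "1.8.0_131"'
# Extract the quoted version string; 1.8.0_20 is the known-bad release here.
ver=$(printf '%s\n' "$line" | sed -n 's/.*"\(.*\)".*/\1/p')
echo "$ver"   # prints 1.8.0_131
```

On a live system, pipe `java -version 2>&1 | head -1` into the same `sed`.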

2. doesn't match stored brokerId 0 in meta.properties

The cause is that broker.id in meta.properties (under the log.dirs directory) no longer matches broker.id in server.properties (under the config directory). The fix is to make the two consistent and restart.
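A quick way to compare the two values is to read `broker.id` out of both files. The `broker_id` helper below is ours, not part of Kafka; with this article's layout, the files to compare would be /usr/local/kafka/logs/meta.properties and /usr/local/kafka/config/server.properties (demonstrated here on a stand-in file so the sketch is self-contained):

```shell
# Print the broker.id value from a properties file (hypothetical helper).
broker_id() { sed -n 's/^broker\.id=//p' "$1"; }

# Stand-in for /usr/local/kafka/logs/meta.properties:
printf 'version=0\nbroker.id=2\n' > /tmp/meta.properties.demo
broker_id /tmp/meta.properties.demo   # prints 2
```

If the two numbers differ, edit one to match the other, then restart the broker.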

3. Handling "WARN Error while fetching metadata with correlation id"

[2016-10-14 06:36:18,401] WARN Error while fetching metadata with correlation id 0 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-10-14 06:36:19,543] WARN Error while fetching metadata with correlation id 1 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-10-14 06:36:19,680] WARN Error while fetching metadata with correlation id 2 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-10-14 06:36:19,908] WARN Error while fetching metadata with correlation id 3 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-10-14 06:36:20,116] WARN Error while fetching metadata with correlation id 4 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-10-14 06:36:20,334] WARN Error while fetching metadata with correlation id 5 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-10-14 06:36:20,505] WARN Error while fetching metadata with correlation id 6 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2016-10-14 06:36:20,757] WARN Error while fetching metadata with correlation id 7 : {test333=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Fix: the hostname must be configured in config/server.properties.
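For this Kafka version (0.10.x), "configure the hostname" means setting the address each broker advertises to clients; if it is left unset or resolves wrongly, producers can be handed an unreachable broker address and loop on LEADER_NOT_AVAILABLE. A minimal sketch for server1 using this article's example IPs (repeat on each broker with its own IP; `listeners`/`advertised.listeners` supersede the older `host.name`/`advertised.host.name` settings):

```properties
# server.properties on server1 (192.168.0.1)
listeners=PLAINTEXT://192.168.0.1:9092
advertised.listeners=PLAINTEXT://192.168.0.1:9092
```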

4. java.io.IOException: Connection to 127.0.0.1:9092 (id: 0 rack: null) failed
java.io.IOException: Connection to 127.0.0.1:9092 (id: 0 rack: null) failed
    at kafka.utils.NetworkClientBlockingOps$.awaitReady$1(NetworkClientBlockingOps.scala:84)
    at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:94)
    at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:244)
    at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:234)
    at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
    at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
    at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

This error appeared without any broker having been shut down. The fix was to wipe the broker's local log data: rm -rf /usr/local/kafka/logs/*
