
CentOS 7 + Kafka + ELK 6.5.x Installation and Setup


1 Data Flow

On the source side, Logstash listens on UDP 514 and ships the logs into Kafka for testing. (Filebeat is an alternative collector: it is lightweight, easy on the CPU, and uses few resources. rsyslog can also write to Kafka, from v8.7.0 onward.) The flow is:

logstash (udp 514) => kafka (zookeeper) cluster => logstash (grok) => elasticsearch cluster => kibana

Because Logstash (grok) runs a regular-expression match on every log line and is therefore resource-hungry, a Kafka (ZooKeeper) cluster sits in the middle as a message queue to buffer the load.

2 Source-Side Configuration

2.1 Install Logstash (udp 514)

1) Install the JDK

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel

2) Install Logstash via yum

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cat >> /etc/yum.repos.d/logstash-6.x.repo << 'EOF'

[logstash-6.x]

name=Elastic repository for 6.x packages

baseurl=https://artifacts.elastic.co/packages/6.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

EOF

yum install -y logstash

3) Configure

Set the JVM heap size

vim /etc/logstash/jvm.options

-Xms2g

-Xmx2g

cp /etc/logstash/logstash-sample.conf /etc/logstash/conf.d/logstash514.conf

vim /etc/logstash/conf.d/logstash514.conf

#############################

input {

syslog {

type => "syslog"

port => "514"

}

}

output {

kafka {

codec => "json" # required, otherwise fields such as host are missing from the output

bootstrap_servers => "192.168.89.11:9092,192.168.89.12:9092,192.168.89.13:9092"

topic_id => "SyslogTopic"

}

}

#############################

4) Test the configuration

/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -t

-t: test the configuration file

--path.settings: required when testing standalone so Logstash can find its settings directory

5) Start logstash and enable it at boot (do this only after ZooKeeper + Kafka are set up)

Logstash cannot bind a port below 1024 when running as the logstash user, so the service must be changed to run as root

vim /etc/systemd/system/logstash.service # change the following two lines

##################

User=root

Group=root

##################

Start the service

systemctl daemon-reload

systemctl start logstash

systemctl enable logstash

6) Generate test logs with logger

Example: logger -n <server IP> "log message"

3 Collector-Side Configuration

3.1 Install the zookeeper + kafka cluster

See https://www.cnblogs.com/longBlogs/p/10340251.html
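Once the Kafka cluster is up, the topic used by this pipeline can be created ahead of time. The script below only assembles and prints the command so it can be reviewed before running it on a Kafka node; the ZooKeeper addresses, replication factor, and partition count are assumptions for this lab cluster, not requirements:

```shell
# Assumed ZooKeeper ensemble for the 192.168.89.x lab cluster.
ZK="192.168.89.11:2181,192.168.89.12:2181,192.168.89.13:2181"

# Build the kafka-topics.sh invocation that pre-creates the topic used by
# both logstash configs. Replication factor 2 / 3 partitions are example values.
CMD="kafka-topics.sh --create --zookeeper $ZK --replication-factor 2 --partitions 3 --topic SyslogTopic"

# Print it for review; run it manually on a Kafka node.
echo "$CMD"
```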

4 Install logstash (grok) and write important data to a MySQL database

The logstash-output-jdbc plugin is required for output to MySQL (see https://www.cnblogs.com/longBlogs/p/10340252.html)

4.1 Install logstash

Follow the same steps as in "Source-Side Configuration" → "Install Logstash (udp 514)" above

4.2 Configure

1) Set up the pattern directory and formats

mkdir -p /data/logstash/patterns # directory for additional grok patterns

Define a custom pattern type

vim /data/logstash/patterns/logstash_grok

============================================

# number formats
# matches a number that starts with 1 and has at least 10 digits
NUM [1][0-9]{9,}

============================================
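The pattern can be sanity-checked outside Logstash with any ERE-capable tool; the regex body below is the same one defined above, and the sample log line is made up for illustration:

```shell
# The same regex body as the NUM pattern above: a number starting with 1,
# at least 10 digits long (e.g. a mainland-China mobile number).
RE='[1][0-9]{9,}'

# A made-up log line for illustration.
LINE='Feb  5 10:00:00 host app: user 13800138000 logged in'

# grep -oE prints only the matching substring.
echo "$LINE" | grep -oE "$RE"   # prints 13800138000
```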

2) Configure logstash (grok):

Set the JVM heap size

vim /etc/logstash/jvm.options

-Xms2g

-Xmx2g

Create the pipeline config file

cp /etc/logstash/logstash-sample.conf /etc/logstash/conf.d/logstash_syslog.conf

vim /etc/logstash/conf.d/logstash_syslog.conf

#############################

input {

kafka {

bootstrap_servers => "192.168.89.11:9092,192.168.89.12:9092,192.168.89.13:9092"

topics => ["SyslogTopic"]

codec => "json" # required, otherwise fields such as host are missing from the output

group_id => "logstash_kafka" # all logstash consumers must share the same group_id, otherwise messages are consumed repeatedly

client_id => "logstash00" # must be unique per consumer

consumer_threads => 3 # number of consumer threads

}

}

filter {

grok {

patterns_dir => ["/data/logstash/patterns/"]

match => {

"message" => ".*?%{NUM:num}.*?"

}

}

}

# output to elasticsearch

# MySQL output is not used here, so the jdbc block is commented out

output {

elasticsearch {

hosts => ["192.168.89.20:9200","192.168.89.21:9200"] # multiple hosts can be listed for a cluster

index => "log-%{+YYYY.MM.dd}" # one index per day; index names must be lowercase

}

#jdbc {

#driver_jar_path => "/etc/logstash/jdbc/mysql-connector-java-5.1.47/mysql-connector-java-5.1.47-bin.jar"

#driver_class => "com.mysql.jdbc.Driver"

#connection_string => "jdbc:mysql://<mysql host>:<port>/<database>?user=<user>&password=<password>"

#statement => [ "insert into <table> (TIME, IP, MESSAGES) values (?,?,?)", "%{@timestamp}", "%{host}", "%{message}" ]

#}

}

################################################

3) Start logstash and enable it at boot (do this only after elasticsearch is set up)

Logstash cannot bind a port below 1024 when running as the logstash user, so the service must be changed to run as root

vim /etc/systemd/system/logstash.service # change the following two lines

##################

User=root

Group=root

##################

Start the service

systemctl daemon-reload

systemctl start logstash

systemctl enable logstash

5 Elasticsearch Configuration

5.1 Install elasticsearch (192.168.89.20 and 192.168.89.21)

1) Install the Java environment

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel

2) Set up the yum repository

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cat >> /etc/yum.repos.d/elasticsearch-6.x.repo << 'EOF'

[elasticsearch-6.x]

name=Elasticsearch repository for 6.x packages

baseurl=https://artifacts.elastic.co/packages/6.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

EOF

3) Install

yum install -y elasticsearch

4) Configure

vim /etc/elasticsearch/elasticsearch.yml

cluster.name: syslog_elasticsearch # cluster name; must match on every node

node.name: es_89.20 # node name; must be unique within the cluster

path.data: /data/elasticsearch # data directory

path.logs: /data/elasticsearch/log # log directory

network.host: 0.0.0.0 # bind address

discovery.zen.ping.unicast.hosts: ["192.168.89.20", "192.168.89.21"] # cluster members; with no dedicated master configured, the master is elected automatically

#discovery.zen.minimum_master_nodes: 1 # defaults to 1; the minimum number of master-eligible nodes a node must see before it can operate in the cluster. The recommended value is (N/2)+1, where N is the number of master-eligible nodes: with 3 nodes set it to 2, but with only 2 nodes a value of 2 is problematic, since losing one node makes the whole cluster unreachable.
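The (N/2)+1 rule from the comment above can be checked with shell integer arithmetic; the `quorum` helper below is only an illustration, not part of Elasticsearch:

```shell
# Hypothetical helper illustrating the (N/2)+1 quorum rule for
# discovery.zen.minimum_master_nodes; N = number of master-eligible nodes.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # prints 2 -> safe setting for a 3-node cluster
quorum 2   # prints 2 -> with only 2 nodes, losing one halts the cluster
```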

If you changed the data or log directories in elasticsearch.yml, update the matching settings here as well

#vim /usr/lib/systemd/system/elasticsearch.service

LimitMEMLOCK=infinity

#systemctl daemon-reload

5) Create the directories and assign ownership to a non-root account, otherwise startup fails

mkdir -p /data/elasticsearch/log

Error seen without this:

main ERROR Unable to create file /data/log/elasticsearch/syslog_elasticsearch.log java.io.IOException: Permission denied

Change the ownership of the directory tree to the non-root service account:

chown -R elasticsearch:elasticsearch /data/elasticsearch/

6) Start

systemctl restart elasticsearch

systemctl enable elasticsearch

7) Test

  • Query cluster state, method 1

curl -XGET 'http://192.168.89.20:9200/_cat/nodes'

Append ?v for verbose output with column headers

curl -XGET 'http://192.168.89.20:9200/_cat/nodes?v'

  • Query cluster state, method 2

curl -XGET 'http://192.168.89.20:9200/_cluster/state/nodes?pretty'

  • Query cluster health

curl -XGET 'http://192.168.89.20:9200/_cluster/health?pretty'

6 Analysis and Display Configuration

6.1 Install kibana

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

cat >> /etc/yum.repos.d/kibana-6.x.repo << 'EOF'

[kibana-6.x]

name=Kibana repository for 6.x packages

baseurl=https://artifacts.elastic.co/packages/6.x/yum

gpgcheck=1

gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

enabled=1

autorefresh=1

type=rpm-md

EOF

yum install -y kibana

Configuration file

cat /etc/kibana/kibana.yml | egrep -v "^#|^$"

###############################################

server.port: 5601 # kibana listens on 5601; nginx will proxy port 80 to it

server.host: "0.0.0.0"

elasticsearch.url: "http://192.168.89.20:9200"

###############################################

systemctl start kibana

systemctl enable kibana

Access

http://192.168.89.15:5601

Chinese localization (optional)

Stop the service first

systemctl stop kibana

Apply the localization

A community localization project is available on GitHub: https://github.com/anbai-inc/Kibana_Hanization

yum install -y unzip

Unzip it in the kibana installation directory

unzip Kibana_Hanization-master.zip

cd Kibana_Hanization-master

python main.py <kibana install directory>

Start the service

systemctl start kibana

6.2 Install nginx

nginx is used mainly to add password protection, since Kibana has no login of its own unless security is enabled in Elasticsearch.

1) Install

rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

yum install -y nginx

Install the Apache password generation tool:

yum install -y httpd-tools

2) Configure

Generate the password file:

mkdir -p /etc/nginx/passwd

htpasswd -c -b /etc/nginx/passwd/kibana.passwd admin admin

cp /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.backup

vim /etc/nginx/conf.d/default.conf

####################################

server {

listen 192.168.89.15:80;

server_name localhost;

auth_basic "Kibana Auth";

auth_basic_user_file /etc/nginx/passwd/kibana.passwd;

location / {

root /usr/share/nginx/html;

index index.html index.htm;

proxy_pass http://127.0.0.1:5601;

proxy_redirect off;

}

error_page 500 502 503 504 /50x.html;

location = /50x.html {

root /usr/share/nginx/html;

}

}

##################################

Modify the Kibana configuration file so Kibana is reachable only through nginx:

vim /etc/kibana/kibana.yml

server.host: "localhost"

Restart the services

systemctl restart kibana

systemctl restart nginx

systemctl enable nginx

Access

http://192.168.89.15:80
