ELK Log Analysis System in Practice
Official site:
https://www.elastic.co/cn/
Chinese guide:
https://legacy.gitbook.com/book/chenryn/elk-stack-guide-cn/details
- Since version 5.0, ELK Stack has been rebranded as Elastic Stack, which is roughly ELK Stack + Beats.
- ELK Stack consists of Elasticsearch, Logstash, and Kibana.
- Elasticsearch is a real-time full-text search and analytics engine that provides data collection, analysis, and storage. It is a scalable distributed system with open REST and Java APIs offering efficient search, built on top of the Apache Lucene search library.
- Logstash collects logs (it supports almost any type, including system logs, error logs, and custom application logs), parses them into JSON, and hands them to Elasticsearch.
- Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in Elasticsearch indices. It retrieves data through Elasticsearch's REST API and lets users build custom dashboard views of their own data as well as query and filter it in ad hoc ways.
- Beats is a family of lightweight log shippers. Early ELK architectures used Logstash for both collecting and parsing logs, but Logstash is comparatively heavy on memory, CPU, and I/O; next to it, the CPU and memory footprint of Beats is practically negligible.
- X-Pack is an extension that bundles security, alerting, monitoring, and reporting for the Elastic Stack. It is a commercial component, not open source.
II. ELK Architecture
III. ELK Installation
Environment preparation
1. Configure mutual hostname resolution on every node
[root@node-11 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.71.11.1  node-1
10.71.11.2  node-2
10.71.11.11 node-11
2. Install the JDK on every node
[root@node-11 ~]# yum install -y java-1.8.0-openjdk
Check the JDK version:
[root@node-1 ~]# java -version
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
Note: Logstash does not currently support Java 9.
Installing Elasticsearch
Note: run the following commands on all three nodes.
Import the GPG key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Configure the yum repository:
[root@node-1 ~]# vi /etc/yum.repos.d/elastic.repo
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Refresh the yum cache:
yum makecache
Because package downloads from the repository are slow, we install Elasticsearch from the rpm package instead.
rpm download page:
https://www.elastic.co/downloads/elasticsearch
Upload the downloaded rpm package to each node and install it:
rpm -ivh elasticsearch-6.2.3.rpm
Edit /etc/elasticsearch/elasticsearch.yml and add or modify the following parameters:
## define the cluster name and node name
cluster.name: cluster_elk
node.name: node-1
node.master: true
node.data: false
# define the host IP and port
network.host: 10.71.11.1
http.port: 9200
## define the cluster nodes
discovery.zen.ping.unicast.hosts: ["node-1","node-2","node-11"]
Copy /etc/elasticsearch/elasticsearch.yml from node-1 to node-2 and node-11:
[root@node-1 ~]# scp !$ node-2:/tmp/
scp /etc/elasticsearch/elasticsearch.yml node-2:/tmp/
elasticsearch.yml 100% 3001 3.6MB/s 00:00
[root@node-1 ~]# scp /etc/elasticsearch/elasticsearch.yml node-11:/tmp/
root@node-11's password:
elasticsearch.yml
[root@node-11 yum.repos.d]# cp /tmp/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml
cp: overwrite '/etc/elasticsearch/elasticsearch.yml'? y
[root@node-11 yum.repos.d]# vim /etc/elasticsearch/elasticsearch.yml
On node-2, edit /etc/elasticsearch/elasticsearch.yml:
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: cluster_elk
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: node-2
node.master: false
node.data: true
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 10.71.11.2
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
discovery.zen.ping.unicast.hosts: ["node-1","node-2","node-11"]
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
Modify /etc/elasticsearch/elasticsearch.yml on node-11 in the same way.
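For reference, the lines that differ on node-11 might look like the following. This is a sketch extrapolated from the node-2 example above; whether node-11 is master-eligible is not stated in the original, so it is shown here as a data node like node-2 (an assumption):

```yaml
cluster.name: cluster_elk
node.name: node-11
node.master: false         # assumption: node-11 is not master-eligible
node.data: true            # assumption: node-11 stores data, like node-2
network.host: 10.71.11.11
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node-1","node-2","node-11"]
```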
Start Elasticsearch on node-1:
[root@node-1 ~]# systemctl start elasticsearch
[root@node-1 ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2018-04-12 21:11:28 CST; 12s ago
Docs: http://www.elastic.co
Main PID: 17297 (java)
Tasks: 67
Memory: 1.2G
CGroup: /system.slice/elasticsearch.service
└─17297 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPre...
Apr 12 21:11:28 node-1 systemd[1]: Started Elasticsearch.
Apr 12 21:11:28 node-1 systemd[1]: Starting Elasticsearch...
View the cluster log:
[root@node-1 ~]# tail -f /var/log/elasticsearch/cluster_elk.log
[2018-04-12T21:11:34,704][INFO ][o.e.d.DiscoveryModule ] [node-1] using discovery type [zen]
[2018-04-12T21:11:35,187][INFO ][o.e.n.Node ] [node-1] initialized
[2018-04-12T21:11:35,187][INFO ][o.e.n.Node ] [node-1] starting ...
[2018-04-12T21:11:35,370][INFO ][o.e.t.TransportService ] [node-1] publish_address {10.71.11.1:9300}, bound_addresses {10.71.11.1:9300}
[2018-04-12T21:11:35,380][INFO ][o.e.b.BootstrapChecks ] [node-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-04-12T21:11:38,423][INFO ][o.e.c.s.MasterService ] [node-1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300}
[2018-04-12T21:11:38,428][INFO ][o.e.c.s.ClusterApplierService] [node-1] new_master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300}, reason: apply cluster state (from master [master {node-1}{PVxBZmElTXOHkzavFVFEnA}{xsTmwB7MTwu-8cwwALyTPA}{10.71.11.1}{10.71.11.1:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-04-12T21:11:38,442][INFO ][o.e.h.n.Netty4HttpServerTransport] [node-1] publish_address {10.71.11.1:9200}, bound_addresses {10.71.11.1:9200}
[2018-04-12T21:11:38,442][INFO ][o.e.n.Node ] [node-1] started
[2018-04-12T21:11:38,449][INFO ][o.e.g.GatewayService ] [node-1] recovered [0] indices into cluster_state
Check the cluster health on the master node:
[root@node-1 ~]# curl '10.71.11.1:9200/_cluster/health?pretty'
{
"cluster_name" : "cluster_elk",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 0,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
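The health endpoint is convenient for scripted checks. A minimal sketch that extracts the `status` field with grep; it parses a saved sample response so the snippet is self-contained (against the live cluster you would pipe `curl -s '10.71.11.1:9200/_cluster/health?pretty'` instead of reading the file):

```shell
# Save a sample _cluster/health response (same shape as the output above)
cat > /tmp/health_sample.json <<'EOF'
{
  "cluster_name" : "cluster_elk",
  "status" : "green",
  "number_of_nodes" : 3
}
EOF

# Pull out the value of the "status" field
status=$(grep -o '"status" *: *"[a-z]*"' /tmp/health_sample.json | cut -d'"' -f4)
echo "cluster status: $status"
# → cluster status: green
```

A monitoring loop could retry until `$status` equals "green" before proceeding with maintenance tasks.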
View the cluster's detailed state on node-1:
[root@node-1 ~]# curl '10.71.11.1:9200/_cluster/state?pretty'
{
"cluster_name" : "cluster_elk",
"compressed_size_in_bytes" : 226,
"version" : 2,
"state_uuid" : "-LLN7fEYQJiKZSLqitdOvQ",
"master_node" : "PVxBZmElTXOHkzavFVFEnA",
"blocks" : { },
"nodes" : {
"PVxBZmElTXOHkzavFVFEnA" : {
"name" : "node-1",
"ephemeral_id" : "xsTmwB7MTwu-8cwwALyTPA",
"transport_address" : "10.71.11.1:9300",
"attributes" : { }
}
},
"metadata" : {
"cluster_uuid" : "LaaRmRfRTfOY-ApuNz_nfA",
"templates" : { },
"indices" : { },
"index-graveyard" : {
"tombstones" : [ ]
}
},
"routing_table" : {
"indices" : { }
},
"routing_nodes" : {
"unassigned" : [ ],
"nodes" : { }
},
"snapshots" : {
"snapshots" : [ ]
},
"restore" : {
"snapshots" : [ ]
},
"snapshot_deletions" : {
"snapshot_deletions" : [ ]
}
}
Installing Kibana
Note: run on node-1.
yum install -y kibana
Note: installing via yum is relatively slow, so we use the rpm package instead.
Download kibana-6.2.3-x86_64.rpm from the page below, upload it to node-1, and install Kibana:
https://www.elastic.co/downloads/kibana
[root@node-1 ~]# rpm -ivh kibana-6.2.3-x86_64.rpm
Preparing... ################################# [100%]
package kibana-6.2.3-1.x86_64 is already installed
Edit /etc/kibana/kibana.yml:
server.port: 5601 ## listening port; the default is 5601
server.host: "10.71.11.1" ## server hostname or IP. Note that without the x-pack component you cannot set a Kibana login user and password, so if this is a public IP, anyone can log in to Kibana. If it is an internal IP and port and you still need access from the public network, you can put an nginx reverse proxy in front of Kibana.
elasticsearch.url: "http://10.71.11.1:9200" ## how Kibana talks to Elasticsearch
logging.dest: /var/log/kibana.log ## by default Kibana logs to /var/log/messages; here we set a custom path, /var/log/kibana.log
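A minimal sketch of the nginx reverse proxy mentioned above, adding basic-auth password protection in front of Kibana. The server name and the htpasswd file path are illustrative assumptions, not part of the original setup:

```nginx
server {
    listen 80;
    server_name kibana.example.com;                       # hypothetical public hostname

    location / {
        auth_basic           "Kibana";
        auth_basic_user_file /etc/nginx/kibana.htpasswd;  # assumed file, created with htpasswd
        proxy_pass           http://10.71.11.1:5601;      # the Kibana address from this setup
        proxy_set_header     Host $host;
    }
}
```

With this in place, Kibana's server.host could be bound to an internal IP while nginx handles public access and authentication.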
Start the Kibana service:
[root@node-1 ~]# systemctl start kibana
[root@node-1 ~]# ps aux |grep kibana
kibana 650 109 0.0 944316 99684 ? Rsl 10:59 0:02 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 659 0.0 0.0 112660 976 pts/6 S+ 10:59 0:00 grep --color=auto kib
Access Kibana in a browser: http://10.71.11.1:5601/
Installing Logstash
Note: unless otherwise stated, the following steps are performed on node-2.
Download logstash-6.2.3.rpm from the page below and upload it to node-2:
https://www.elastic.co/downloads/logstash
Install the logstash package:
[root@node-2 ~]# ls logstash-6.2.3.rpm
logstash-6.2.3.rpm
[root@node-2 ~]# rpm -ivh logstash-6.2.3.rpm
Preparing... ################################# [100%]
Updating / installing...
1:logstash-1:6.2.3-1 ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
Configuring Logstash to collect syslog messages
Edit /etc/logstash/conf.d/syslog.conf:
input{
syslog{
type =>"system-syslog"
port => 10514
}
}
output{
stdout{
codec=>rubydebug
}
}
Check the configuration file for syntax errors:
[root@node-2 ~]# cd /usr/share/logstash/bin/
[root@node-2 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
Parameter notes:
--path.settings /etc/logstash/ specifies the Logstash settings directory
-f specifies the custom pipeline configuration file
Check that port 10514 is listening, then edit /etc/rsyslog.conf and add the following line under the ####RULES#### section to forward system logs to it:
[root@node-2 ~]# vi /etc/rsyslog.conf
*.* @@10.71.11.2:10514
After you run the Logstash startup command, the terminal does not return to the prompt; this is expected, given the stdout/rubydebug output defined in /etc/logstash/conf.d/syslog.conf.
Open a second ssh session to node-2 and restart rsyslog there:
[root@node-2 ~]# systemctl restart rsyslog.service
Then run ssh node-2 from the new session;
if log entries appear in the original node-2 session, Logstash is collecting system logs successfully.
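The foreground startup command referenced above is not shown in the original. Based on the syntax-check invocation earlier, it would plausibly be the same command without the test flag (an assumption):

```shell
# Assumed foreground start, mirroring the earlier check minus --config.test_and_exit
cd /usr/share/logstash/bin/
./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
# Events received on port 10514 are then printed to this terminal by the rubydebug codec.
```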
The following steps are performed on node-2.
Edit /etc/logstash/conf.d/syslog.conf:
input{
syslog{
type =>"system-syslog"
port => 10514
}
}
output{
elasticsearch {
hosts => ["10.71.11.1:9200"]
index => "system-syslog-%{+YYYY.MM}" ## define the index name (year.month)
}
}
Verify the configuration file syntax again:
[root@node-2 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Change the ownership of the Logstash data directory:
[root@node-2 bin]# chown -R logstash /var/lib/logstash
Starting the Logstash service takes a little while; once it is up, both ports 9600 and 10514 will be listening.
Note: the Logstash service log is at
/var/log/logstash/logstash-plain.log
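The service start command itself is not shown in the original; on this systemd-based setup it would presumably be the following, with a quick check of the two listening ports:

```shell
# Assumed: start Logstash via systemd and confirm the 9600/10514 listeners
systemctl start logstash
ss -lntp | grep -E '9600|10514'
```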
Viewing the data index in Elasticsearch
Edit /etc/logstash/logstash.yml on node-2 and add:
http.host: "10.71.11.2"
Run the following command on node-1 to list the indices:
[root@node-1 ~]# curl '10.71.11.1:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open system-syslog-2018.04 3Za0b5rBTYafhsxQ-A1P-g 5 1
The index was generated successfully, which shows that Elasticsearch and Logstash are communicating normally.
To get the details of an index, query it by name. Querying the placeholder name indexname, as below, returns a 404 because no such index exists:
[root@node-1 ~]# curl '10.71.11.1:9200/indexname?pretty'
{
"error" : {
"root_cause" : [
{
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "indexname",
"index_uuid" : "_na_",
"index" : "indexname"
}
],
"type" : "index_not_found_exception",
"reason" : "no such index",
"resource.type" : "index_or_alias",
"resource.id" : "indexname",
"index_uuid" : "_na_",
"index" : "indexname"
},
"status" : 404
}
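Substituting the actual index name from the _cat/indices output above should return the index metadata instead of a 404. A sketch, assuming the index name generated earlier in this walkthrough:

```shell
# Query the index created by the system-syslog-%{+YYYY.MM} pattern
curl '10.71.11.1:9200/system-syslog-2018.04?pretty'
```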