
Deploying an ELK Real-Time Log Analysis Platform

I had been planning to build and experiment with an ELK-based real-time log analysis platform myself when I came across this article. It is detailed and practical, so I am reposting it here for the record.

Also, while working with our company's RDS log monitoring setup recently, I learned that it uses filebeat as its log shipper. A quick search shows that filebeat and logstash share the same origin, with the former being more lightweight; see https://www.zhihu.com/question/54058964.

In day-to-day operations work, handling system and business logs is especially important. Here I share my own deployment record of ELK (+Redis), the open-source real-time log analysis platform (based solely on my own hands-on steps; corrections are welcome)~


1. Concepts
Logs mainly include system logs, application logs, and security logs. Operations and development staff use logs to learn about a server's software and hardware state and to track down configuration errors and their causes. Analyzing logs regularly also reveals server load, performance, and security issues, so that corrective action can be taken in time.

Typically, logs are stored scattered across different devices. If you manage dozens or hundreds of servers and still consult logs by logging in to each machine in turn, the process is tedious and inefficient. The pressing need is centralized log management, for example using open-source syslog to collect and aggregate the logs from all servers.

Once logs are centralized, searching and aggregating them becomes the next headache. Linux commands such as grep, awk, and wc can handle basic search and counting, but for more demanding querying, sorting, and statistics across a large fleet of machines, such methods quickly fall short.
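For a sense of the pain point, here is a minimal sketch of that traditional approach (the host names and log path are hypothetical):

# count HTTP 502 errors host by host over SSH -- workable for 3 machines, hopeless for 300
for h in web01 web02 web03; do
    echo -n "$h: "
    ssh $h 'grep " 502 " /var/log/nginx/access.log | wc -l'
done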

The open-source real-time log analysis platform ELK solves all of the problems above. ELK consists of three open-source tools, ElasticSearch, Logstash, and Kibana:
1) ElasticSearch is an open-source distributed search server based on Lucene. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful-style interface, multiple data sources, and automatic search load balancing. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is developed in Java, released as open source under the Apache license, and is the second most popular enterprise search engine. Designed for the cloud, it achieves real-time search while being stable, reliable, fast, and easy to install and use.
In elasticsearch, data is distributed equally across all nodes.
2) Logstash is a fully open-source tool that collects, filters, and analyzes your logs and stores them for later use (such as searching). Speaking of searching, logstash comes with a web interface for searching and displaying all logs.
3) Kibana is a browser-based front end for Elasticsearch. Also open source and free, Kibana provides a friendly web interface for the log data collected by Logstash and stored in ElasticSearch, helping you aggregate, analyze, and search important log data.


How ELK works (diagram):






As the diagram shows: Logstash collects the logs produced by the AppServer and stores them in the ElasticSearch cluster, while Kibana queries the ES cluster, builds visualizations from the data, and returns them to the browser.


 


2. Deploying the ELK Environment
(0) Base environment
OS: CentOS 7.1
Firewall: disabled
SELinux: disabled

Machine environment: two machines
elk-node1: 192.168.1.160       # master machine
elk-node2: 192.168.1.161      # slave machine

Note:
Master-slave mode:
After the master receives logs, it shards part of the data (a random portion) onto the slave; at the same time, master and slave each make replicas of their own shards and place the replicas on the other machine, which guarantees that no data is lost.
If the master goes down, pointing the elasticsearch host in the clients' log-collection configuration at the slave keeps ELK log collection and the web display working.
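For example, a minimal failover sketch on a client (the config path is hypothetical; adapt it to wherever your output section lives):

# repoint the elasticsearch output from the dead master to the slave
sed -i 's/192.168.1.160:9200/192.168.1.161:9200/' /etc/logstash/conf.d/01-logstash.conf
# verify the edited config before restarting collection
/opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf --configtest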

=========================================================================================
Since elk-node1 and elk-node2 are virtual machines without public IPs, access goes through port forwarding on the host machine.

There are two forwarding setups (pick either one):
Ports 19200 and 19201 on the host forward to port 9200 on elk-node1 and elk-node2 respectively.
Port 15601 on the host forwards to port 5601 on elk-node1.
Host machine: 112.110.115.10 (internal IP 192.168.1.7)  (an arbitrary IP is recorded here so as not to expose the real production IP)

a) Forward through the haproxy service on the host machine. The proxy configuration on the host is as follows:
[root@host conf]# pwd
/usr/local/haproxy/conf
[root@host conf]# cat haproxy.cfg
..........
..........
listen node1-9200 0.0.0.0:19200
mode tcp
option tcplog
balance roundrobin
server 192.168.1.160 192.168.1.160:9200 weight 1 check inter 1s rise 2 fall 2

listen node2-9200 0.0.0.0:19201
mode tcp
option tcplog
balance roundrobin
server 192.168.1.161 192.168.1.161:9200 weight 1 check inter 1s rise 2 fall 2

listen node1-5601 0.0.0.0:15601
mode tcp
option tcplog
balance roundrobin
server 192.168.1.160 192.168.1.160:5601 weight 1 check inter 1s rise 2 fall 2

Restart the haproxy service
[root@host conf]# /etc/init.d/haproxy restart

Configure the host machine's firewall
[root@host conf]# cat /etc/sysconfig/iptables
.........
-A INPUT -p tcp -m state --state NEW -m tcp --dport 19200 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 19201 -j ACCEPT 
-A INPUT -p tcp -m state --state NEW -m tcp --dport 15601 -j ACCEPT

[root@host conf]# /etc/init.d/iptables restart

b) Via NAT port forwarding on the host machine

[root@host conf]# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 19200 -j DNAT --to-destination 192.168.1.160:9200
[root@host conf]# iptables -t nat -A POSTROUTING -d 192.168.1.160/32 -p tcp -m tcp --sport 9200 -j SNAT --to-source 192.168.1.7
[root@host conf]# iptables -t filter -A INPUT -p tcp -m state --state NEW -m tcp --dport 19200 -j ACCEPT

[root@host conf]# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 19201 -j DNAT --to-destination 192.168.1.161:9200
[root@host conf]# iptables -t nat -A POSTROUTING -d 192.168.1.161/32 -p tcp -m tcp --sport 9200 -j SNAT --to-source 192.168.1.7
[root@host conf]# iptables -t filter -A INPUT -p tcp -m state --state NEW -m tcp --dport 19201 -j ACCEPT

[root@host conf]# iptables -t nat -A PREROUTING -p tcp -m tcp --dport 15601 -j DNAT --to-destination 192.168.1.160:5601
[root@host conf]# iptables -t nat -A POSTROUTING -d 192.168.1.160/32 -p tcp -m tcp --sport 5601 -j SNAT --to-source 192.168.1.7
[root@host conf]# iptables -t filter -A INPUT -p tcp -m state --state NEW -m tcp --dport 15601 -j ACCEPT

[root@host conf]# service iptables save
[root@host conf]# service iptables restart

One reminder:
After the NAT port-forwarding rules are set up, the two lines below must be commented out in /etc/sysconfig/iptables, or NAT forwarding will misbehave! Normally, once the NAT rules above are configured and the firewall is saved and restarted, these two lines are removed from /etc/sysconfig/iptables automatically.
[root@host conf]# vim /etc/sysconfig/iptables
..........
#-A INPUT -j REJECT --reject-with icmp-host-prohibited
#-A FORWARD -j REJECT --reject-with icmp-host-prohibited
[root@host ~]# service iptables restart

=========================================================================================

(1) Installing and configuring Elasticsearch

Base installation (perform on elk-node1 and elk-node2 simultaneously)

1) Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2) Add the yum repository
[root@elk-node1 ~]# vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

3) Install elasticsearch
[root@elk-node1 ~]# yum install -y elasticsearch

4) Install related test software
# Download and install the EPEL repository first: epel-release-latest-7.noarch.rpm; otherwise yum fails with: No Package.....
[root@elk-node1 ~]# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@elk-node1 ~]# rpm -ivh epel-release-latest-7.noarch.rpm
# Install Redis
[root@elk-node1 ~]# yum install -y redis
# Install Nginx
[root@elk-node1 ~]# yum install -y nginx
# Install Java
[root@elk-node1 ~]# yum install -y java

After installing Java, verify:
[root@elk-node1 ~]# java -version
openjdk version "1.8.0_102"
OpenJDK Runtime Environment (build 1.8.0_102-b14)
OpenJDK 64-Bit Server VM (build 25.102-b14, mixed mode)

Configuration and deployment (elk-node1 is configured first, below)
1) Edit the configuration file
[root@elk-node1 ~]# mkdir -p /data/es-data
[root@elk-node1 ~]# vim /etc/elasticsearch/elasticsearch.yml           [clear the file's contents, then configure the following]
cluster.name: huanqiu                      # cluster name (must be identical on every node of the same cluster)
node.name: elk-node1                       # node name; matching the hostname is recommended
path.data: /data/es-data                   # data directory
path.logs: /var/log/elasticsearch/         # log directory
bootstrap.mlockall: true                   # lock memory so it is not swapped out
network.host: 0.0.0.0                      # network binding
http.port: 9200                            # port


2) Start and check
[root@elk-node1 ~]# chown -R elasticsearch.elasticsearch /data/
[root@elk-node1 ~]# systemctl start elasticsearch
[root@elk-node1 ~]# systemctl status elasticsearch
CGroup: /system.slice/elasticsearch.service
└─3005 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSI...


Note: the output above shows elasticsearch running with a minimum heap of 256m and a maximum of 1g.
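To change the heap, the 2.x RPM reads it from /etc/sysconfig/elasticsearch; a hedged sketch (size it to your machine, commonly about half of RAM):

[root@elk-node1 ~]# vim /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=1g          # sets -Xms and -Xmx together
[root@elk-node1 ~]# systemctl restart elasticsearch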


[root@elk-node1 src]# netstat -antlp | egrep "9200|9300"
tcp6 0 0 :::9200 :::* LISTEN 3005/java 
tcp6 0 0 :::9300 :::* LISTEN 3005/java




Then access it via the web (Google Chrome is the recommended browser):

http://112.110.115.10:19200/


3) Query data from the command line (from the host 112.110.115.10 or another external server, as below)
[root@host src]# curl -i -XGET 'http://192.168.1.160:9200/_count?pretty' -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}
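Indices can be listed the same way (a quick sketch; the list stays empty until data is inserted):

[root@host src]# curl 'http://192.168.1.160:9200/_cat/indices?v'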

Inspecting data with raw commands like this feels decidedly unpleasant.



4) Next, install plugins and use them for viewing~  (install both plugins below on elk-node1 as well as elk-node2)
4.1) Install the head plugin
----------------------------------------------------------------------------------------------------
a) Plugin installation, method one
[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

b) Plugin installation, method two
First download the head plugin into the /usr/local/src directory
Download URL: https://github.com/mobz/elasticsearch-head

----------------------------------------------------------------
head plugin package on Baidu pan: https://pan.baidu.com/s/1boBE0qj
Extraction code: ifj7
----------------------------------------------------------------

[root@elk-node1 src]# unzip elasticsearch-head-master.zip
[root@elk-node1 src]# ls
elasticsearch-head-master elasticsearch-head-master.zip

Create a head directory under /usr/share/elasticsearch/plugins,
then move the files extracted from elasticsearch-head-master.zip into /usr/share/elasticsearch/plugins/head,
and simply restart the elasticsearch service!
[root@elk-node1 src]# cd /usr/share/elasticsearch/plugins/
[root@elk-node1 plugins]# mkdir head
[root@elk-node1 plugins]# ls
head
[root@elk-node1 plugins]# cd head
[root@elk-node1 head]# cp -r /usr/local/src/elasticsearch-head-master/* ./
[root@elk-node1 head]# pwd
/usr/share/elasticsearch/plugins/head


[root@elk-node1 head]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 head]# ll
total 40
-rw-r--r--. 1 elasticsearch elasticsearch 104 Sep 28 01:57 elasticsearch-head.sublime-project
-rw-r--r--. 1 elasticsearch elasticsearch 2171 Sep 28 01:57 Gruntfile.js
-rw-r--r--. 1 elasticsearch elasticsearch 3482 Sep 28 01:57 grunt_fileSets.js
-rw-r--r--. 1 elasticsearch elasticsearch 1085 Sep 28 01:57 index.html
-rw-r--r--. 1 elasticsearch elasticsearch 559 Sep 28 01:57 LICENCE
-rw-r--r--. 1 elasticsearch elasticsearch 795 Sep 28 01:57 package.json
-rw-r--r--. 1 elasticsearch elasticsearch 100 Sep 28 01:57 plugin-descriptor.properties
-rw-r--r--. 1 elasticsearch elasticsearch 5211 Sep 28 01:57 README.textile
drwxr-xr-x. 5 elasticsearch elasticsearch 4096 Sep 28 01:57 _site
drwxr-xr-x. 4 elasticsearch elasticsearch 29 Sep 28 01:57 src
drwxr-xr-x. 4 elasticsearch elasticsearch 66 Sep 28 01:57 test


[root@elk-node1 _site]# systemctl restart elasticsearch
-----------------------------------------------------------------------------------------------------


Access the plugin (ideally finish the elk-node2 configuration and plugin installation first, then do the access and data-insertion tests below)
http://112.110.115.10:19200/_plugin/head/




First insert a sample record to test:
as follows, open "Any Request" (複合查詢), select POST, enter an arbitrary path such as /index-demo/test, then type the document below it (mind the commas between fields across lines);
once the data is in (the wangshibo / hello world content below), click "Validate JSON" -> "Submit Request". On success, the right-hand pane shows index, type, version, and so on, with failed: 0 (success).




Then read the sample back, as follows:
under "Any Request", choose GET, append the id from the POST result to /index-demo/test/, and leave the body empty, i.e. just {}!
Click "Validate JSON" -> "Submit Request" and the right-hand pane now shows the data inserted above (wangshibo, hello world).
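The same round trip works with curl instead of the head UI (the index name matches the example above; <id> stands for the id the POST returns):

# insert a document
[root@host ~]# curl -XPOST 'http://192.168.1.160:9200/index-demo/test' -d '{"user":"wangshibo","mesg":"hello world"}'
# read it back with the _id from the POST response
[root@host ~]# curl -XGET 'http://192.168.1.160:9200/index-demo/test/<id>?pretty'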




Open "Structured Query" (基本查詢) and look the data up; as below, the record inserted above can be queried:



Open "Browser" (資料瀏覽); the inserted data is visible there too:


 
As below: be sure to complete the configuration on elk-node2 beforehand (the configuration is covered below); otherwise the cluster status turns yellow after data is inserted, returning to the normal green once elk-node2 is configured and has joined the cluster.
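That status can also be checked from the command line (a hedged check):

[root@host ~]# curl 'http://192.168.1.160:9200/_cluster/health?pretty'
# "status" : "green" once both nodes are in; "yellow" means replica shards are unassigned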



4.2) Install the kopf monitoring plugin
--------------------------------------------------------------------------------------------------------------------

a) Monitoring plugin installation, method one
[root@elk-node1 src]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf


b) Monitoring plugin installation, method two

First download the kopf monitoring plugin into the /usr/local/src directory
Download URL: https://github.com/lmenezes/elasticsearch-kopf

----------------------------------------------------------------
kopf plugin package on Baidu pan: https://pan.baidu.com/s/1qYixSL2
Extraction code: ya4t
----------------------------------------------------------------

[root@elk-node1 src]# unzip elasticsearch-kopf-master.zip
[root@elk-node1 src]# ls
elasticsearch-kopf-master elasticsearch-kopf-master.zip

Create a kopf directory under /usr/share/elasticsearch/plugins,
then move the files extracted from elasticsearch-kopf-master.zip into /usr/share/elasticsearch/plugins/kopf,
and simply restart the elasticsearch service!
[root@elk-node1 src]# cd /usr/share/elasticsearch/plugins/
[root@elk-node1 plugins]# mkdir kopf
[root@elk-node1 plugins]# cd kopf
[root@elk-node1 kopf]# cp -r /usr/local/src/elasticsearch-kopf-master/* ./
[root@elk-node1 kopf]# pwd
/usr/share/elasticsearch/plugins/kopf

[root@elk-node1 kopf]# chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/plugins
[root@elk-node1 kopf]# ll
total 40
-rw-r--r--. 1 elasticsearch elasticsearch 237 Sep 28 16:28 CHANGELOG.md
drwxr-xr-x. 2 elasticsearch elasticsearch 22 Sep 28 16:28 dataset
drwxr-xr-x. 2 elasticsearch elasticsearch 73 Sep 28 16:28 docker
-rw-r--r--. 1 elasticsearch elasticsearch 4315 Sep 28 16:28 Gruntfile.js
drwxr-xr-x. 2 elasticsearch elasticsearch 4096 Sep 28 16:28 imgs
-rw-r--r--. 1 elasticsearch elasticsearch 1083 Sep 28 16:28 LICENSE
-rw-r--r--. 1 elasticsearch elasticsearch 1276 Sep 28 16:28 package.json
-rw-r--r--. 1 elasticsearch elasticsearch 102 Sep 28 16:28 plugin-descriptor.properties
-rw-r--r--. 1 elasticsearch elasticsearch 3165 Sep 28 16:28 README.md
drwxr-xr-x. 6 elasticsearch elasticsearch 4096 Sep 28 16:28 _site
drwxr-xr-x. 4 elasticsearch elasticsearch 27 Sep 28 16:28 src
drwxr-xr-x. 4 elasticsearch elasticsearch 4096 Sep 28 16:28 tests

[root@elk-node1 _site]# systemctl restart elasticsearch
-----------------------------------------------------------------------------------------------------

Access the plugin: (as below; likewise install the plugins on elk-node2 first, or the cluster nodes will show a yellow warning state on access)

http://112.110.115.10:19200/_plugin/kopf/#!/cluster



*************************************************************************
Now configure node elk-node2  (install the two plugins above on elk-node2 in the same way)


Note: the installation and configuration of the two nodes are in fact basically identical.


[root@elk-node2 src]# mkdir -p /data/es-data
[root@elk-node2 ~]# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: huanqiu 
node.name: elk-node2
path.data: /data/es-data 
path.logs: /var/log/elasticsearch/ 
bootstrap.mlockall: true 
network.host: 0.0.0.0 
http.port: 9200 
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.1.160", "192.168.1.161"]
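With multicast discovery disabled, the nodes find each other through the unicast host list above. Once elk-node2 is up, a quick hedged check that both nodes have joined the cluster:

[root@host ~]# curl 'http://192.168.1.160:9200/_cat/nodes?v'
# both elk-node1 and elk-node2 should be listed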


# Fix ownership
[root@elk-node2 src]# chown -R elasticsearch.elasticsearch /data/


# Start the service
[root@elk-node2 src]# systemctl start elasticsearch
[root@elk-node2 src]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2016-09-28 16:49:41 CST; 1 weeks 3 days ago
Docs: http://www.elastic.co
Process: 17798 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 17800 (java)
CGroup: /system.slice/elasticsearch.service
└─17800 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFra...


Oct 09 13:42:22 elk-node2 elasticsearch[17800]: [2016-10-09 13:42:22,295][WARN ][transport ] [elk-node2] Transport res...943817]
Oct 09 13:42:23 elk-node2 elasticsearch[17800]: [2016-10-09 13:42:23,111][WARN ][transport ] [elk-node2] Transport res...943846]
................
................


# Check the ports
[root@elk-node2 src]# netstat -antlp|egrep "9200|9300"
tcp6 0 0 :::9200 :::* LISTEN 2928/java 
tcp6 0 0 :::9300 :::* LISTEN 2928/java 
tcp6 0 0 127.0.0.1:48200 127.0.0.1:9300 TIME_WAIT - 
tcp6 0 0 ::1:41892 ::1:9300 TIME_WAIT -
*************************************************************************


Query elk-node2's data from the command line (from the host 112.110.115.10 or another external server, as below)
[root@host ~]# curl -i -XGET 'http://192.168.1.161:9200/_count?pretty' -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95


{
  "count" : 1,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  }
}

Then access elk-node2 via the web:
http://112.110.115.10:19201/

Access the two plugins:
http://112.110.115.10:19201/_plugin/head/
http://112.110.115.10:19201/_plugin/kopf/#!/cluster

The screenshots here are basically the same as for elk-node1 and are omitted.


(2) Installing and configuring Logstash (this must be installed on the client machines; install on both elk-node1 and elk-node2)

Base installation (logstash is installed on the clients; once the collected data is written into elasticsearch, it can be viewed through the elasticsearch interface)

1) Download and install the GPG key
[root@elk-node1 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2) Add the yum repository
[root@elk-node1 ~]# vim /etc/yum.repos.d/logstash.repo
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1


3) Install logstash
[root@elk-node1 ~]# yum install -y logstash


4) Confirm that elasticsearch is running (logstash itself is launched manually below)
[root@elk-node1 ~]# systemctl start elasticsearch
[root@elk-node1 ~]# systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2016-11-07 18:33:28 CST; 3 days ago
Docs: http://www.elastic.co
Main PID: 8275 (java)
CGroup: /system.slice/elasticsearch.service
└─8275 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFrac...
..........
..........


Testing the data flow
1) Basic input and output
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Settings: Default filter workers: 1
Logstash startup completed
hello                                                           # type this
2016-11-11T06:41:07.690Z elk-node1 hello                        # this is printed
wangshibo                                                       # type this
2016-11-11T06:41:10.608Z elk-node1 wangshibo                    # this is printed

2) Verbose output with rubydebug
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
Settings: Default filter workers: 1
Logstash startup completed
hello                                                           # type this
{                                                               # the following is printed
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2016-11-11T06:44:06.711Z",
          "host" => "elk-node1"
}
wangshibo                                                       # type this
{                                                               # the following is printed
       "message" => "wangshibo",
      "@version" => "1",
    "@timestamp" => "2016-11-11T06:44:11.270Z",
          "host" => "elk-node1"
}


3) Write the input into elasticsearch
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.1.160:9200"]} }'
Settings: Default filter workers: 1
Logstash startup completed                       # type the test data below
123456 
wangshibo
huanqiu
hahaha


The difference between rubydebug output and writing to elasticsearch is simply the output section at the end: the former uses a codec, the latter an elasticsearch output.

The data written into elasticsearch can then be viewed in the elasticsearch interface, as shown below:

Note:
after the master receives logs, it shards part of the data (a random portion) onto the slave, and master and slave each place replicas of their shards on the other machine, so no data is lost. Below, the data the master collected went to its own shards 1 and 3, with the rest on the slave's shards 0, 2, and 4.









4) Write into elasticsearch and simultaneously print a copy to the terminal
[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.1.160:9200"]} stdout{ codec => rubydebug}}'
Settings: Default filter workers: 1
Logstash startup completed
huanqiupc
{
       "message" => "huanqiupc",
      "@version" => "1",
    "@timestamp" => "2016-11-11T07:27:42.012Z",
          "host" => "elk-node1"
}
wangshiboqun
{
       "message" => "wangshiboqun",
      "@version" => "1",
    "@timestamp" => "2016-11-11T07:27:55.396Z",
          "host" => "elk-node1"
}


Text stored this way can be kept long-term, is simple to work with, and compresses well. Next, log in to the elasticsearch interface to view it;


 


Writing logstash configuration files

1) Logstash configuration
A simple configuration:
[root@elk-node1 ~]# vim /etc/logstash/conf.d/01-logstash.conf
input { stdin { } }
output {
        elasticsearch { hosts => ["192.168.1.160:9200"]}
        stdout { codec => rubydebug }
}

Run it:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf
Settings: Default filter workers: 1
Logstash startup completed
beijing                                                # type this
{                                                      # the following is printed
       "message" => "beijing",
      "@version" => "1",
    "@timestamp" => "2016-11-11T07:41:48.401Z",
          "host" => "elk-node1"
}


--------------------------------------------------------------------------------------------------
References:
https://www.elastic.co/guide/en/logstash/current/configuration.html 
https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
--------------------------------------------------------------------------------------------------





2) Collecting system logs

[root@elk-node1 ~]# vim  file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}
 
output {
    elasticsearch {
       hosts => ["192.168.1.160:9200"]
       index => "system-%{+YYYY.MM.dd}"
    }
}
 


Run the log collection above. As shown, this command keeps running, which means logs are being monitored and collected; if it is interrupted, collection stops! So run it in the background~
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &

Log in to the elasticsearch interface to view this machine's system log data:
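Or, skipping the web UI, a quick hedged check that the day's index exists:

[root@elk-node1 ~]# curl 'http://192.168.1.160:9200/_cat/indices?v' | grep system
# a system-YYYY.MM.dd index should appear once /var/log/messages entries arrive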



--------------------------------------------------------------------------------------------------
References:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
--------------------------------------------------------------------------------------------------


3) Collecting java logs, alongside the log collection covered above


[root@elk-node1 ~]# vim  file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
}
 
input {
    file {
       path => "/var/log/elasticsearch/huanqiu.log"
       type => "es-error"
       start_position => "beginning"
    }
}
 
 
output {
 
    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }
 
    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
}
Note:
if your logs already carry a type field, you cannot also use type in the conf file like this.
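A hedged workaround in that case is to route on a custom field instead; the field name logtype below is made up for illustration:

[root@elk-node1 ~]# vim logtype-demo.conf
input {
    file {
      path => "/var/log/messages"
      add_field => { "logtype" => "system" }      # custom field instead of type
    }
}
output {
    if [logtype] == "system" {
        elasticsearch { hosts => ["192.168.1.160:9200"] }
    }
}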


Run the collection with:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &


Log in to the elasticsearch interface and view the data:




--------------------------------------------------------------------------------------------------
References:
https://www.elastic.co/guide/en/logstash/current/event-dependent-configuration.html
--------------------------------------------------------------------------------------------------


---------------
One problem:
each line of an error gets collected as its own event, instead of one error being collected as one event block.


Below, lines are assembled into events instead:

[root@elk-node1 ~]# vim multiline.conf
input {
    stdin {
       codec => multiline {
          pattern => "^\["
          negate => true
          what => "previous"
        }
    }
}
output {
    stdout {
      codec => "rubydebug"
     } 
}
Run the command:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f multiline.conf
Settings: Default filter workers: 1
Logstash startup completed
123
456
[123
{
    "@timestamp" => "2016-11-11T09:28:56.824Z",
       "message" => "123\n456",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "elk-node1"
}
123]
[456]
{
    "@timestamp" => "2016-11-11T09:29:09.043Z",
       "message" => "[123\n123]",
      "@version" => "1",
          "tags" => [
        [0] "multiline"
    ],
          "host" => "elk-node1"
}
Until a [ is seen, nothing is collected; only when a [ arrives does the buffered input count as one event and get collected.
--------------------------------------------------------------------------------------------------
References
https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html
--------------------------------------------------------------------------------------------------



(3) Installing and configuring Kibana
1) Install kibana:
[root@elk-node1 ~]# cd /usr/local/src
[root@elk-node1 src]# wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 src]# tar zxf kibana-4.3.1-linux-x64.tar.gz
[root@elk-node1 src]# mv kibana-4.3.1-linux-x64 /usr/local/
[root@elk-node1 src]# ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana

2) Edit the configuration file:
[root@elk-node1 config]# pwd
/usr/local/kibana/config
[root@elk-node1 config]# cp kibana.yml kibana.yml.bak
[root@elk-node1 config]# vim kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.160:9200"
kibana.index: ".kibana"


Because kibana runs in the foreground, either dedicate a window to it or use screen.
Install screen and start kibana inside it:
[root@elk-node1 ~]# yum -y install screen
[root@elk-node1 ~]# screen                          # this opens another terminal window
[root@elk-node1 ~]# /usr/local/kibana/bin/kibana
log [18:23:19.867] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [18:23:19.911] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [18:23:19.941] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [18:23:19.953] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [18:23:19.963] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [18:23:19.995] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [18:23:20.004] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [18:23:20.010] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready


Then press ctrl+a followed by d to detach; the kibana service started in that screen window keeps running....
[root@elk-node1 ~]# screen -ls
There is a screen on:
15041.pts-0.elk-node1 (Detached)
1 Socket in /var/run/screen/S-root.
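If you prefer not to use screen, a hedged alternative is nohup:

[root@elk-node1 ~]# nohup /usr/local/kibana/bin/kibana > /var/log/kibana.log 2>&1 &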


3) Access kibana: http://112.110.115.10:15601/
As below: to add the java log collection configured above, fill in es-error*; for the system log configured above, system*; and so on (the collection items can be read off the logstash configuration)

Then click Discover at the top and inspect the data in Discover:



To view log entries, click "Discover" --> "message", then click the "add" behind it.
Note:
whichever attributes you want shown with the log content on the right, click "add" behind those attributes on the left.
The figure below adds the message and path attributes:


   

Now the log entries displayed on the right carry the message and path attributes.

Clicking the hidden << behind the log-content attributes on the right collapses the content back.



To add a new log collection item, click Settings -> +Add New, for example to add the system log. Do not forget the trailing *.






 

To delete a log collection item in kibana, click its delete icon, as below.
 


If kibana shows no log content and displays "No results found", as in the figure below, the log you are viewing simply produced no output in the current time window; click the clock in the upper-right corner to adjust the time range.





4) Collecting nginx access logs

Modify the nginx configuration, adding the following to the http and server blocks of nginx.conf respectively:

##### in the http block
          log_format json '{"@timestamp":"$time_iso8601",'
                           '"@version":"1",'
                           '"client":"$remote_addr",'
                           '"url":"$uri",'
                           '"status":"$status",'
                           '"domain":"$host",'
                           '"host":"$server_addr",'
                           '"size":$body_bytes_sent,'
                           '"responsetime":$request_time,'
                           '"referer": "$http_referer",'
                           '"ua": "$http_user_agent"'
'}';
##### in the server block
            access_log /var/log/nginx/access_json.log json;

Screenshots:





Start the nginx service:

[root@elk-node1 ~]# systemctl start nginx
[root@elk-node1 ~]# systemctl status nginx
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2016-11-11 19:06:55 CST; 3s ago
  Process: 15119 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 15116 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 15114 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 15122 (nginx)
   CGroup: /system.slice/nginx.service
           ├─15122 nginx: master process /usr/sbin/nginx
           ├─15123 nginx: worker process
           └─15124 nginx: worker process
 
Nov 11 19:06:54 elk-node1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Nov 11 19:06:55 elk-node1 nginx[15116]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Nov 11 19:06:55 elk-node1 nginx[15116]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Nov 11 19:06:55 elk-node1 systemd[1]: Started The nginx HTTP and reverse proxy server.

Write the collection file
This time collect in json form:

[root@elk-node1 ~]# vim json.conf
input {
   file {
      path => "/var/log/nginx/access_json.log"
      codec => "json"
   }
}
 
output {
   stdout {
      codec => "rubydebug"
   }
}
Start the log collector:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f json.conf        # or append & to run it in the background


Request the nginx page (on elk-node1's host machine run: curl http://192.168.1.160) and output like the following appears:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f json.conf
Settings: Default filter workers: 1
Logstash startup completed
{
      "@timestamp" => "2016-11-11T11:10:53.000Z",
        "@version" => "1",
          "client" => "192.168.1.7",
             "url" => "/index.html",
          "status" => "200",
          "domain" => "192.168.1.160",
            "host" => "192.168.1.160",
            "size" => 3700,
    "responsetime" => 0.0,
         "referer" => "-",
              "ua" => "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2",
            "path" => "/var/log/nginx/access_json.log"
}
Note:
the json.conf above only prints the nginx log; it does not feed it into elasticsearch yet, so at this point no nginx logs can be seen in the elasticsearch interface.


Some configuration is needed to feed the nginx log into elasticsearch: merge it into the master file file.conf, as below, so the nginx-log flows into elasticsearch as well (from here on, this one master file is enough; append any log you want to add to it):

[root@elk-node1 ~]# cat file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
 
    file {
       path => "/var/log/elasticsearch/huanqiu.log"
       type => "es-error"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
}
 
 
output {
 
    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }
 
    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "nignx-log-%{+YYYY.MM.dd}"
        }
    }
}

Add the --configtest flag to check the config file for syntax errors or bad settings; this matters a lot!!
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf --configtest
Configuration OK

Then run the logstash command again (since the command above was already put in the background, this is not strictly necessary; alternatively, kill the old process first and background it again), then hit the nginx page again to test
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &

Log in to the elasticsearch interface and view:


Integrate the nginx log into the kibana interface, as below:




5) Collecting syslog


Write the collection file and run it.

[root@elk-node1 ~]# cat syslog.conf
input {
    syslog {
        type => "system-syslog"
        host => "192.168.1.160"
        port => "514"
    }
}
 
output {
    stdout {
        codec => "rubydebug"
    }
}
Run the collection file above:
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f syslog.conf


Open a new window and check whether the service is listening:
[root@elk-node1 ~]# netstat -ntlp|grep 514
tcp6 0 0 192.168.1.160:514 :::* LISTEN 17842/java
[root@elk-node1 ~]# vim /etc/rsyslog.conf
#*.* @@remote-host:514                                                           [add the following line below this one]
*.* @@192.168.1.160:514


[root@elk-node1 ~]# systemctl restart rsyslog


Back in the original window (the terminal running the collection file), data appears:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f syslog.conf
Settings: Default filter workers: 1
Logstash startup completed
{
           "message" => "Stopping System Logging Service...\n",
          "@version" => "1",
        "@timestamp" => "2016-11-13T10:35:30.000Z",
              "type" => "system-syslog",
              "host" => "192.168.1.160",
          "priority" => 30,
         "timestamp" => "Nov 13 18:35:30",
         "logsource" => "elk-node1",
           "program" => "systemd",
          "severity" => 6,
          "facility" => 3,
    "facility_label" => "system",
    "severity_label" => "Informational"
}
........
........
Add it to the master file file.conf as well:

[root@elk-node1 ~]# cat file.conf
input {
    file {
      path => "/var/log/messages"
      type => "system"
      start_position => "beginning"
    }
 
    file {
       path => "/var/log/elasticsearch/huanqiu.log"
       type => "es-error"
       start_position => "beginning"
       codec => multiline {
           pattern => "^\["
           negate => true
           what => "previous"
       }
    }
    file {
       path => "/var/log/nginx/access_json.log"
       codec => json
       start_position => "beginning"
       type => "nginx-log"
    }
    syslog {
        type => "system-syslog"
        host => "192.168.1.160"
        port => "514"
    }
}
 
 
output {
 
    if [type] == "system"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-%{+YYYY.MM.dd}"
        }
    }
 
    if [type] == "es-error"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "nignx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog"{
        elasticsearch {
           hosts => ["192.168.1.160:9200"]
           index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}
Run the master file (first test whether its configuration is OK, then kill the file.conf process started in the background earlier, and run it again):
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf --configtest
Configuration OK
[root@elk-node1 ~]# /opt/logstash/bin/logstash -f file.conf &


Test:
write data into the log and watch elasticsearch and kibana change:
[root@elk-node1 ~]# logger "hehehehehehe1"
[root@elk-node1 ~]# logger "hehehehehehe2"
[root@elk-node1 ~]#
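A final hedged check from the command line that the syslog index received those test entries (the date in the index name follows the system-syslog-%{+YYYY.MM.dd} pattern above; substitute the current day):

[root@host ~]# curl 'http://192.168.1.160:9200/system-syslog-2016.11.13/_search?q=message:hehehehehehe1&pretty'
# the hits should include the logger test messages above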