ELK production log collection architecture (filebeat → logstash → redis → logstash → elasticsearch → kibana)
The architecture diagram is as follows.
Notes:
1. The front-end servers run only the lightweight log shipper filebeat (no JDK environment required).
2. Collected logs are sent directly to the redis message queue without any processing.
3. The redis message queue only buffers the log data temporarily; persistence is not required.
4. logstash reads the data from the redis message queue, filters it according to defined rules, and stores it in elasticsearch.
5. The front end uses kibana for graphical display.
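Steps 2–4 above rely on a Redis list working as a FIFO buffer between the shipping logstash (producer) and the indexing logstash (consumer). The sketch below illustrates that behavior with a plain Python class standing in for Redis; the method names mirror the Redis commands LPUSH and RPOP, and nothing here is a real redis client.

```python
# Conceptual sketch only: a Python list standing in for a Redis list
# used as a message queue (data_type => "list" in the configs below).

class FakeRedisList:
    """Minimal stand-in for a Redis list used as a log buffer."""

    def __init__(self):
        self._items = []

    def lpush(self, value):
        # LPUSH prepends to the head of the list.
        self._items.insert(0, value)

    def rpop(self):
        # RPOP removes from the tail, so LPUSH + RPOP behaves as FIFO.
        return self._items.pop() if self._items else None

queue = FakeRedisList()
queue.lpush('{"message": "log line 1"}')
queue.lpush('{"message": "log line 2"}')

assert queue.rpop() == '{"message": "log line 1"}'  # oldest entry first
assert queue.rpop() == '{"message": "log line 2"}'
assert queue.rpop() is None  # queue drained; nothing is persisted
```

Once the consumer pops an entry it is gone from Redis, which is why step 3 says persistence is unnecessary: the queue is only an in-flight buffer.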
Check the environment
Install filebeat on the client servers
rpm -ivh filebeat-6.2.4-x86_64.rpm
Edit the configuration file /etc/filebeat/filebeat.yml (this example collects the system logs and the nginx access logs)
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/messages
  tags: ["system-log-5611"]
- type: log
  enabled: true
  paths:
    - /data/logs/nginx/http-access.log
  tags: ["nginx-log"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: ["localhost:5044"]
PS: the system logs get one tag, and likewise the nginx logs get their own tag, to make filtering easier later on.
Output to logstash (in this test logstash runs on the same host; in production it is a separate host)
Edit the logstash configuration file /etc/logstash/conf.d/beat-redis.conf (this logstash instance only collects logs and does no processing; based on the different tags it routes them into different redis databases)
Standard output is also enabled to make troubleshooting easier
input {
  beats {
    port => 5044
  }
}
output {
  if "system-log-5611" in [tags] {
    redis {
      host => "192.168.56.11"
      port => "6379"
      password => "123456"
      db => "3"
      data_type => "list"
      key => "system-log-5611"
    }
    stdout {
      codec => rubydebug
    }
  }
  if "nginx-log" in [tags] {
    redis {
      host => "192.168.56.11"
      port => "6379"
      password => "123456"
      db => "4"
      data_type => "list"
      key => "nginx-log"
    }
    stdout {
      codec => rubydebug
    }
  }
}
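The routing logic in beat-redis.conf can be sketched as a plain lookup: each tag maps to a redis db and key, and every event is delivered to the destinations of whatever tags it carries. The db numbers and keys below mirror the config; `route_event` is a hypothetical helper for illustration, not anything in logstash itself.

```python
# Illustrative sketch of tag-based routing: which redis db/key an
# event is written to, based on its tags (mirrors beat-redis.conf).

ROUTES = {
    "system-log-5611": {"db": 3, "key": "system-log-5611"},
    "nginx-log": {"db": 4, "key": "nginx-log"},
}

def route_event(event):
    """Return (db, key) destinations for every matching tag on the event."""
    return [
        (route["db"], route["key"])
        for tag, route in ROUTES.items()
        if tag in event.get("tags", [])
    ]

assert route_event({"tags": ["nginx-log"]}) == [(4, "nginx-log")]
assert route_event({"tags": ["system-log-5611"]}) == [(3, "system-log-5611")]
assert route_event({"tags": ["unknown"]}) == []  # untagged events go nowhere
```

Keeping each log type in its own db and key is what lets the downstream logstash consume and filter them independently.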
Start the services and check that the configuration is correct
systemctl start filebeat
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/beat-redis.conf
The output is as follows (it is also written to redis at the same time)
System log output
nginx output (notice that the nginx message field is in JSON format, but logstash has not yet processed it)
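Because nginx writes its access log in JSON, the `message` field that filebeat ships is itself a JSON string until a filter parses it. The snippet below shows, on a made-up sample line (the field names are illustrative, not taken from the actual nginx log_format), what the later `json` filter does: turn that string into structured fields.

```python
import json

# Hypothetical sample of a JSON-formatted nginx access-log line as it
# arrives inside the "message" field (field names are assumptions).
message = '{"clientip": "192.168.56.1", "status": "200", "user_ua": "curl/7.61.1"}'

# Equivalent of logstash's json { source => "message" }: parse the
# string so each key becomes a queryable field.
parsed = json.loads(message)

assert parsed["clientip"] == "192.168.56.1"
assert parsed["user_ua"] == "curl/7.61.1"
```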
Setting up the redis host is not covered in detail here
On another host, edit the logstash configuration file to read the log data from redis, filter it, and output it to elasticsearch
redis-elastic.conf
input {
  redis {
    host => "192.168.56.11"
    port => "6379"
    password => "123456"
    db => "3"
    data_type => "list"
    key => "system-log-5611"
  }
  redis {
    host => "192.168.56.11"
    port => "6379"
    password => "123456"
    db => "4"
    data_type => "list"
    key => "nginx-log"
  }
}
filter {
  if "nginx-log" in [tags] {
    json {
      source => "message"
    }
    if [user_ua] != "-" {
      useragent {
        target => "agent"   # put the parsed user-agent information into a separate field
        source => "user_ua" # which field of the message to analyze
      }
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
PS: because the different logs were collected into different redis databases, the input section has multiple redis blocks.
Because the nginx log is in JSON format, it must go through the filter to be output as structured JSON.
First check whether the tags indicate an nginx log; if so, parse the message as JSON, then check the client information and, if it is not empty, use the useragent filter to extract detailed client information.
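The useragent step above can be sketched as follows. The real logstash useragent filter matches against a full browser database; this regex-based version only handles the simple "Name/Version" form (e.g. curl) and exists purely to show the shape of the transformation, including the "-" guard from the config.

```python
import re

# Illustrative stand-in for the useragent filter: extract name/version
# from a raw user-agent string into separate fields. parse_user_agent
# is a hypothetical helper, far simpler than the real filter.

def parse_user_agent(user_ua):
    if user_ua == "-":
        return None  # mirrors the config: skip events with no client info
    match = re.match(r"(?P<name>[\w.-]+)/(?P<version>[\d.]+)", user_ua)
    return match.groupdict() if match else {"name": user_ua, "version": None}

assert parse_user_agent("-") is None
assert parse_user_agent("curl/7.61.1") == {"name": "curl", "version": "7.61.1"}
```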
Start it and check the output
System logs (the output format is the same as on the collecting logstash)
nginx access logs (output in JSON format)
With the standard output looking correct, modify the configuration file to output to elasticsearch
input {
  redis {
    host => "192.168.56.11"
    port => "6379"
    password => "123456"
    db => "3"
    data_type => "list"
    key => "system-log-5611"
  }
  redis {
    host => "192.168.56.11"
    port => "6379"
    password => "123456"
    db => "4"
    data_type => "list"
    key => "nginx-log"
  }
}
filter {
  if "nginx-log" in [tags] {
    json {
      source => "message"
    }
    if [user_ua] != "-" {
      useragent {
        target => "agent"   # put the parsed user-agent information into a separate field
        source => "user_ua" # which field of the message to analyze
      }
    }
  }
}
output {
  if "nginx-log" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "nginx-log-%{+YYYY.MM}"
    }
  }
  if "system-log-5611" in [tags] {
    elasticsearch {
      hosts => ["192.168.56.11:9200"]
      index => "system-log-5611-%{+YYYY.MM}"
    }
  }
}
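The `%{+YYYY.MM}` suffix in the elasticsearch output rolls the data into one index per month. As a quick illustration, the equivalent naming in Python uses strftime, where logstash's YYYY.MM pattern corresponds to `%Y.%m` (the helper `index_name` below is just for demonstration).

```python
from datetime import datetime, timezone

# Sketch of monthly index naming equivalent to index => "prefix-%{+YYYY.MM}".
def index_name(prefix, when):
    return f"{prefix}-{when.strftime('%Y.%m')}"

assert index_name("nginx-log", datetime(2018, 6, 15, tzinfo=timezone.utc)) == "nginx-log-2018.06"
assert index_name("system-log-5611", datetime(2018, 12, 1, tzinfo=timezone.utc)) == "system-log-5611-2018.12"
```

Monthly indices keep the index count manageable while still allowing old months to be dropped or archived wholesale.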
Refresh the access logs
View them through the head plugin