
Building an ElasticSearch + Redis + Logstash + Filebeat Log Analysis System with Docker


1. Basic System Architecture

  A previous post covered building an ELK log analysis system on physical machines; interested readers can refer to it (>>link<<). This post shows how Docker makes the same setup faster and more convenient. The architecture is shown below:

[Architecture diagram]

  Note: the WEB server stands for any machine whose logs we collect. Filebeat gathers the logs and ships them to Logstash2, which pushes them into the Redis message queue; Logstash1 then pulls events from Redis and forwards them to ElasticSearch, where they are indexed and visualized with Kibana. Two Logstash layers are used because the WEB server can be any server, so several different logs may need to be analyzed. This architecture suits fairly large clusters: in production each single point can be scaled out, for example by clustering Redis and Logstash.

2. Building the ES Cluster with Docker

  Assuming Docker is already installed, create the mount directories for the ES container and edit the configuration file:

~]# mkdir -pv /root/elk/{logs,data,conf}
vim /root/elk/conf/elasticsearch.yml
cluster.name: es5.6-cluster # cluster name; must be identical on every node in the cluster
node.name: node1 # node identifier
network.host: 192.168.29.115 # node IP
http.port: 9200 # listening port
discovery.zen.ping.unicast.hosts: ["192.168.29.115", "192.168.29.116"] # cluster nodes
http.cors.enabled: true
http.cors.allow-origin: "*"
~]# docker container run --name es5.6 --network host \
-v /root/elk/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /root/elk/data/:/usr/share/elasticsearch/data/ \
-v /root/elk/logs/:/usr/share/elasticsearch/logs/ \
-p 9200:9200 -p 9300:9300 -d --rm docker.io/elasticsearch:5.6
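  Once both node containers are up, the cluster state can be sanity-checked over the HTTP API (the IP below is the node address used in this setup; this is only a quick verification, not part of the build):

```shell
# Query cluster health and the node list over the REST API;
# status should be "green" once both nodes have joined
curl -s "http://192.168.29.115:9200/_cluster/health?pretty"
curl -s "http://192.168.29.115:9200/_cat/nodes?v"
```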

  Install ElasticSearch-head to check whether the ES cluster came up successfully:

~]# docker container run --name es-head -p 9100:9100 -d --rm mobz/elasticsearch-head:5

[ElasticSearch-head screenshot]

3. Installing Filebeat with Docker

  Edit the Filebeat configuration file, mount the configuration file and log files into the container, and define the logs to ship and the Logstash output targets:

vim /root/filebeat/conf/filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
  exclude_lines: ["^DBG"]
  document_type: system-log-0019
- input_type: log
  paths:
    - /var/log/nginx/access.log
  document_type: nginx-log-0019 # becomes the type field in Logstash, usable there in if conditions
output.logstash:
  hosts: ["192.168.29.119:5044","192.168.29.119:5045"]
  enabled: true
  worker: 1
  compression_level: 3
  loadbalance: true
~]# docker container run --name filebeat --network host -v /root/filebeat/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /var/log/messages:/var/log/messages -v /var/log/nginx/access.log:/var/log/nginx/access.log  --rm -d docker.elastic.co/beats/filebeat:5.6.15 filebeat  -c /usr/share/filebeat/filebeat.yml
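  Whether Filebeat started cleanly can be checked from the container logs (a quick check, using the container name chosen above):

```shell
# Tail the Filebeat container logs; harvester start messages indicate
# that the configured log files are being read
docker logs --tail 20 filebeat
```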

4. Installing Logstash with Docker

docker container run --name logstash -p 9600:9600  -v /var/log/nginx/access.log:/var/log/nginx/access.log -v /var/log/messages:/var/log/messages  -v /root/logstash/pipeline/stdout.conf:/etc/logstash/conf.d/stdout.conf -d  --network host -it --rm docker.io/logstash:5.6  -f /etc/logstash/conf.d/stdout.conf

  An example of Logstash collecting several logs and writing them out (unrelated to this architecture; shown only as a demonstration):

vim /root/logstash/pipeline/stdout.conf
input {
    file {
        type => "nginxaccesslog"
        path => "/var/log/nginx/access.log"
        start_position => "beginning"
    }
    file {
        type => "syslog"
        path => "/var/log/messages"
        start_position => "beginning"
    }
}
output {
    if [type] == "nginxaccesslog" {
        elasticsearch {
            hosts => ["192.168.29.115:9200"]
            index => "nginx-log-0018-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "syslog" {
        elasticsearch {
            hosts => ["192.168.29.115:9200"]
            index => "syslog-0018-%{+YYYY.MM.dd}"
        }
    }
}

  Configure the input and output on Logstash2:

  First run a test: use Filebeat to collect /var/log/messages and /var/log/nginx/access.log from the WEB server and print them to standard output. The configuration is as follows:

vim /root/logstash/conf/stdout.conf
input {
    beats {
        port => 5044
        codec => "json"
    }
    beats {
        port => 5045
        codec => "json"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}

  Start Logstash2 and check that it reads the logs and prints them:

docker container run --name logstash -p 9600:9600  -v /root/logstash/conf/stdout.conf:/etc/logstash/conf.d/stdout.conf -it  \
--network host --rm docker.io/logstash:5.6 -f /etc/logstash/conf.d/stdout.conf
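  To have something to observe in the rubydebug output, a couple of test events can be generated on the WEB server (hypothetical commands: logger feeds /var/log/messages on systems where syslog writes there, and the curl only produces an access-log entry if nginx is running locally):

```shell
# Append a test line to /var/log/messages via syslog
logger "elk-pipeline-test"
# Generate one nginx access-log entry
curl -s -o /dev/null http://localhost/
```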

  If the output looks like the screenshot, everything is working:

[Logstash stdout screenshot]

  Both /var/log/messages and /var/log/nginx/access.log are being collected and printed correctly.

5. Installing Redis with Docker

  Edit the configuration:

vim /root/redis/conf/redis.conf
bind 0.0.0.0
port 6379
requirepass 123456 # set a password
save ""
pidfile /var/run/redis/redis.pid
logfile /var/log/redis/redis.log # mind the file permissions

  Run it:

~]# docker container run --name redis -v /root/redis/conf/redis.conf:/usr/local/etc/redis/redis.conf -v /root/redis/log/redis.log:/var/log/redis/redis.log -v /root/redis/run/redis.pid:/var/run/redis/redis.pid -v /root/redis/data/:/data/  -p 6379:6379  --network host -d  docker.io/redis:4 redis-server /usr/local/etc/redis/redis.conf
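  A quick way to confirm the Redis container is up and accepting the password (assuming redis-cli is available on the host):

```shell
# PING should answer PONG once authenticated
redis-cli -h 192.168.29.117 -a 123456 ping
```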

6. Starting Logstash2 with Docker to Collect Multiple Logs and Ship Them to Redis

vim /root/logstash/conf/stdout.conf
input {
    beats {
        port => 5044
        codec => "json"
    }
    beats {
        port => 5045
        codec => "json"
    }
}
output {
    if [type] == "system-log-0019" {
        redis {
            data_type => "list"
            host => "192.168.29.117"
            port => "6379"
            key => "system-log-0019"
            db => "4"
            password => "123456"
        }
    }
    if [type] == "nginx-log-0019" {
        redis {
            data_type => "list"
            host => "192.168.29.117"
            port => "6379"
            key => "nginx-log-0019"
            db => "4"
            password => "123456"
        }
    }
}

  Start Logstash2 again:

~]# docker container run --name logstash -p 9600:9600  -v /root/logstash/conf/stdout.conf:/etc/logstash/conf.d/stdout.conf -it  --network host --rm docker.io/logstash:5.6 -f /etc/logstash/conf.d/stdout.conf

  Connect to Redis to check whether the logs have been collected:

~]# redis-cli -h 192.168.29.117
192.168.29.117:6379> AUTH 123456
OK
192.168.29.117:6379> SELECT 4 # select the database number
OK
192.168.29.117:6379[4]> KEYS *
1) "nginx-log-0019" # the logs were written successfully; these are the keys defined earlier in Logstash
2) "system-log-0019"
192.168.29.117:6379[4]> LLEN system-log-0019
(integer) 6400
192.168.29.117:6379[4]> LLEN nginx-log-0019
(integer) 313
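  Besides LLEN, a single queued event can be peeked at to confirm the JSON structure Logstash wrote (same connection details as above; LRANGE reads without removing the element):

```shell
# Show the oldest queued event in database 4 without consuming it
redis-cli -h 192.168.29.117 -a 123456 -n 4 LRANGE system-log-0019 0 0
```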

  You can also check the connections on the Redis host:

~]# lsof -n -i:6379
COMMAND     PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 14230 polkitd    6u  IPv4 106595      0t0  TCP *:6379 (LISTEN)
redis-ser 14230 polkitd    7u  IPv4 118204      0t0  TCP 192.168.29.117:6379->192.168.29.104:40320 (ESTABLISHED)
redis-ser 14230 polkitd   10u  IPv4 109238      0t0  TCP 127.0.0.1:6379->127.0.0.1:52460 (ESTABLISHED)
redis-cli 17066    root    3u  IPv4 117806      0t0  TCP 127.0.0.1:52460->127.0.0.1:6379 (ESTABLISHED)

  So far the log flow is WEB → Filebeat → Logstash2 → Redis. Next, configure Redis → Logstash1 → ES → Kibana.

7. Configuring Logstash1 with Docker

vim /root/logstash/conf/stdout.conf
input {
    redis {
        data_type => "list"
        host => "192.168.29.117"
        port => "6379"
        key => "system-log-0019"
        db => "4"
        password => "123456"
    }
    redis {
        data_type => "list"
        host => "192.168.29.117"
        port => "6379"
        key => "nginx-log-0019"
        db => "4"
        password => "123456"
        codec => "json"
    }
}
output {
    if [type] == "system-log-0019" { # use an if condition to pick the ES target
        elasticsearch {
            hosts => ["192.168.29.115:9200"] # nodes can be customized
            index => "system-log-0019-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log-0019" {
        elasticsearch {
            hosts => ["192.168.29.115:9200"]
            index => "nginx-log-0019-%{+YYYY.MM.dd}"
        }
    }
}
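  The `%{+YYYY.MM.dd}` suffix in the index option is a Logstash sprintf date reference: it expands from each event's @timestamp, so one index is created per day. For events stamped with today's date, the resulting index names can be previewed locally with `date` (an illustration only; Logstash itself does the expansion per event):

```shell
# Preview what today's index names would look like;
# Logstash derives the date from each event's @timestamp instead
date +"system-log-0019-%Y.%m.%d"
date +"nginx-log-0019-%Y.%m.%d"
```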

  Start Logstash1 with the configuration mounted:

~]# docker container run --name logstash -p 9600:9600  -v /root/logstash/conf/stdout.conf:/etc/logstash/conf.d/stdout.conf -it  --network host --rm docker.io/logstash:5.6 -f /etc/logstash/conf.d/stdout.conf

  Check whether the data in Redis has been consumed into ES:

192.168.29.117:6379[4]> LLEN nginx-log-0019
(integer) 0
192.168.29.117:6379[4]> LLEN system-log-0019
(integer) 0

  The empty lists above show that the data has been pulled out of Redis and indexed into ES.
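  On the ES side, the newly created daily indices can be listed to confirm the documents arrived (same REST-API sanity check as before):

```shell
# List indices; expect system-log-0019-* and nginx-log-0019-*
# entries with a non-zero docs.count
curl -s "http://192.168.29.115:9200/_cat/indices?v"
```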

8. Starting and Configuring Kibana with Docker

  Configure Kibana:

vim /etc/kibana/kibana.yml
server.host: "127.0.0.1"
elasticsearch.url: "http://192.168.29.115:9200"

  Start Kibana:

~]# docker container run --name kibana \
-v /etc/kibana/kibana.yml:/etc/kibana/kibana.yml \
--network host \
-p 5601:5601 \
-d --rm kibana:5.6

  For security, it is advisable to put Kibana behind an Nginx reverse proxy with TLS and authentication.
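  A minimal sketch of such an Nginx reverse proxy, assuming a certificate pair and an htpasswd file already exist at the paths shown (the domain and file paths here are hypothetical):

```
server {
    listen 443 ssl;
    server_name kibana.example.com;                 # hypothetical domain

    ssl_certificate     /etc/nginx/ssl/kibana.crt;  # assumed certificate path
    ssl_certificate_key /etc/nginx/ssl/kibana.key;

    auth_basic           "Kibana";                  # basic auth in front of Kibana
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        proxy_pass http://127.0.0.1:5601;           # Kibana listens locally on 5601
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```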

  Finally, add the index patterns in Kibana to browse and search the collected logs:

[Kibana screenshots: index pattern creation and log search]
