
ELK log analysis (Filebeat + Logstash + Elasticsearch) configuration


Filebeat reads the logs and ships them to Logstash; Logstash processes them and then forwards the results to Elasticsearch.

I. Filebeat

  1. Project log files:

Filebeat reads the log files listed under paths; with the configuration below it automatically picks up every file matching /data/share/business_log/TA-*/debug.log.

#=========================== Filebeat prospectors =============================
 
filebeat.prospectors:
 
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
 
- type: log
 
  # Change to true to enable this prospector configuration.
  enabled: true
 
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /usr/local/server/openresty/nginx/logs/*.log
    - /data/share/business_log/TA-*/debug.log
    #- c:\programdata\elasticsearch\logs\*

Filebeat's handling of multi-line log entries:

multiline:
    pattern: '^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
    negate: true
    match: after

The configuration above means: any line that does not start with a timestamp is merged onto the end of the previous line (the regex is rough, but it works).
pattern: the regular expression each line is tested against
negate: true or false; the default is false, meaning lines that match the pattern are merged into the previous line; with true, lines that do NOT match the pattern are merged into the previous line
match: after or before, i.e. merge onto the end or the beginning of the previous line
There are two more options, commented out by default; they can usually be left alone:
max_lines: 500
timeout: 5s
max_lines: the maximum number of lines merged into one event, default 500
timeout: the timeout for assembling one merged event, default 5s, which keeps the merge from consuming too much time or hanging
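To see what `negate: true` plus `match: after` does, here is a small Python sketch (illustrative only, not Filebeat's actual implementation) that applies the same pattern to a few hypothetical log lines:

```python
import re

# Same pattern as the Filebeat config above: a line starts a NEW event
# only if it begins with an HH:MM:SS timestamp.
pattern = re.compile(r'^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]')

def merge_lines(lines):
    """negate: true + match: after -- a line that does NOT match the
    pattern is appended to the end of the previous event."""
    events = []
    for line in lines:
        if pattern.match(line) or not events:
            events.append(line)
        else:
            events[-1] += '\n' + line
    return events

log_lines = [
    '10:15:30 ERROR request failed',
    'java.lang.NullPointerException',          # continuation: no timestamp
    '    at com.example.Foo.bar(Foo.java:42)', # continuation: no timestamp
    '10:15:31 INFO next request',
]
print(merge_lines(log_lines))  # two events: the stack trace stays with its ERROR line
```

This is exactly why `negate: true` matters for stack traces: without it, only lines that *match* the timestamp pattern would be merged, which is the opposite of what a multi-line Java log needs.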

  2. Nginx log files
#=========================== Filebeat prospectors =============================
 
filebeat.prospectors:
 
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
 
- type: log
 
  # Change to true to enable this prospector configuration.
  enabled: true
 
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/server/openresty/nginx/logs/access.log
    - /usr/local/server/openresty/nginx/logs/error.log
    #- /data/share/business_log/TA-*/debug.log
    #- c:\programdata\elasticsearch\logs\*
  3. Output configuration
    Comment out the Elasticsearch output section and configure the Logstash section instead; Filebeat will then ship everything it reads to the Logstash servers listed under hosts.
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.18.1.152:5044","172.18.1.153:5044","172.18.1.154:5044"]
  index: "logstash-%{+yyyy.MM.dd}"
 
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
 
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
 
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Filebeat start command: nohup ./filebeat -e -c filebeat-TA.yml >/dev/null 2>&1 &

II. Logstash

  1. Basic configuration

Logstash itself cannot form a cluster; once connected, Filebeat polls the configured Logstash hosts and sends data to whichever server is available.

The Logstash configuration below listens on port 5044, receives the logs Filebeat sends over, filters them with grok, assigns a different type per log, and stores the events in the Elasticsearch cluster.

Project logs and nginx logs are handled in one configuration. Note that Elasticsearch index names must be all lowercase; uppercase letters lead to strange errors.

input {
  beats {
    port => "5044"
  }
}
 
filter {
 
  date {
      match => ["@timestamp", "yyyy-MM-dd HH:mm:ss"]
  }
  grok {
    match => {
      "source" => "(?<type>([A-Za-z]*-[A-Za-z]*-[A-Za-z]*)|([A-Za-z]*-[A-Za-z]*)|access|error)"
    }
  }
 
}
 
output {
  # each project's logs need their own conditional branch
  if [type] == "MS-System-OTA"{
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
      index => "logstash-ms-system-ota-%{+YYYY.MM.dd}"
    }
  }else if [type] == "access" or [type] == "error"{
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
      index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
  }else{
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
    }
  }
  stdout {
    codec => rubydebug
  }
}
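The grok filter above derives `type` from the `source` field (the path of the file Filebeat read). A quick way to sanity-check that alternation is to run it as a plain regex in Python; the sample values below are hypothetical:

```python
import re

# The same alternation as the (?<type>...) capture in the grok filter:
# three-part or two-part hyphenated names, or the literal access/error.
type_re = re.compile(
    r'(?P<type>([A-Za-z]*-[A-Za-z]*-[A-Za-z]*)|([A-Za-z]*-[A-Za-z]*)|access|error)'
)

# Hypothetical source values
for source in [
    '/usr/local/server/openresty/nginx/logs/access.log',
    '/usr/local/server/openresty/nginx/logs/error.log',
    'MS-System-OTA',
]:
    print(source, '->', type_re.search(source).group('type'))
```

One caveat: on a full path that contains more than one hyphenated segment, the leftmost match wins, so it is worth testing the pattern against your real source values before relying on the `type` it produces.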
  2. Logstash grok-patterns
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b

POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}

# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT %{IPORHOST}:%{POSINT}

# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?

# Months: January, Feb, 3, 03, 12, December
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHNUM2 (?:0[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])

# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)

# Years?
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}

# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}

# Shortcuts
QS %{QUOTEDSTRING}

# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}

# Log Levels
LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)
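The patterns above are named regular expressions that can reference each other with `%{NAME}` (grok additionally supports `%{NAME:field}` to name the capture). A minimal sketch of the expansion idea, using simplified versions of a few patterns (Python's `re` lacks the atomic groups some originals use, so YEAR is simplified here):

```python
import re

# Simplified copies of a few patterns from the list above.
PATTERNS = {
    'MONTHNUM': r'(?:0?[1-9]|1[0-2])',
    'MONTHDAY': r'(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])',
    'YEAR': r'\d{4}',  # the grok original uses an atomic group
    'DATE_US': r'%{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}',
}

def expand(pattern, patterns=PATTERNS):
    """Recursively replace %{NAME} references with their definitions."""
    return re.sub(r'%\{(\w+)\}',
                  lambda m: '(?:' + expand(patterns[m.group(1)]) + ')',
                  pattern)

date_re = re.compile(expand('%{DATE_US}'))
print(bool(date_re.fullmatch('09/18/2018')))  # True
```

This is not how Logstash implements grok internally, but it shows why the pattern file reads as a dependency graph: `DATE_US` is only meaningful once `MONTHNUM`, `MONTHDAY`, and `YEAR` are expanded.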
  3. A few grok demos for different message formats
    1. Handling the message from nginx error.log
    # message:   2018/09/18 16:33:51 [error] 15003#0: *545757 no live upstreams while connecting to upstream, client: 39.108.4.83, server: dev-springboot-admin.tvflnet.com, request: "POST /instances HTTP/1.1", upstream: "http://localhost/instances", host: "dev-springboot-admin.tvflnet.com"
    
    filter {
      # define the field layout of the message
      grok {
        match => { "message" => "%{DATA:timestamp} \[%{DATA:level}\] %{DATA:nginxmessage}, client: %{DATA:client}, server: %{DATA:server}, request: \"%{DATA:request}\", upstream: \"%{DATA:upstream}\", host: \"%{DATA:host}\"" }
      }
    }
    2. Handling the message from nginx error.log (variant with referrer)
    # message:    2018/04/19 20:40:27 [error] 4222#0: *53138 open() "/data/local/project/WebSites/AppOTA/theme/js/frame/layer/skin/default/icon.png" failed (2: No such file or directory), client: 218.17.216.171, server: dev-app-ota.tvflnet.com, request: "GET /theme/js/frame/layer/skin/default/icon.png HTTP/1.1", host: "dev-app-ota.tvflnet.com", referrer: "http://dev-app-ota.tvflnet.com/theme/js/frame/layer/skin/layer.css"
    
    filter {
      # define the field layout of the message
      grok {
        match => { "message" => "%{DATA:timestamp} \[%{DATA:level}\] %{DATA:nginxmessage}, client: %{DATA:client}, server: %{DATA:server}, request: \"%{DATA:request}\", host: \"%{DATA:host}\", referrer: \"%{DATA:referrer}\"" }
      }
    }
    3. Handling the message from the lua error.log
    # message:    2018/09/05 18:02:19 [error] 2325#0: *17083157 [lua] PushFinish.lua:38: end push statistics, client: 119.137.53.205, server: dev-system-ota-statistics.tvflnet.com, request: "POST /upgrade/push HTTP/1.1", host: "dev-system-ota-statistics.tvflnet.com"
    
    filter {
      # define the field layout of the message
      grok {
        match => { "message" => "%{DATA:timestamp} \[%{DATA:level}\] %{DATA:luamessage}, client: %{DATA:client}, server: %{DATA:server}, request: \"%{DATA:request}\", host: \"%{DATA:host}\"" }
      }
    }
    4. Handling the TV-client API log message
    # message:    traceid:[Thread:943-sn:sn-mac:mac] 2018-09-18 11:07:03.525 DEBUG com.flnet.utils.web.log.DogLogAspect 55 - Params-參數(JSON):{"backStr":"{\"groupid\":5}","build":201808310938,"ip":"119.147.146.189","mac":"mac","modelCode":"SHARP_0_50#SHARP#IQIYI#LCD_50SUINFCA_H","sn":"sn","version":"modelCode"}
    
    filter {
      # define the field layout of the message
      grok {
        match => { "message" => "traceid:%{DATA:traceid}\[Thread:%{DATA:thread}\-sn:%{DATA:sn}\-mac:%{DATA:mac}\] %{TIMESTAMP_ISO8601:timestamp} %{DATA:level} %{GREEDYDATA:message}" }
        # message already exists on the event; overwrite it instead of turning it into an array
        overwrite => ["message"]
      }
    }
    5. Handling the project log message
    # message:    traceid:[] 2018-09-14 02:14:48.209 WARN  de.codecentric.boot.admin.client.registration.ApplicationRegistrator 115 - Failed to register application as Application(name=ta-system-ota, managementUrl=http://TV-DEV-API01:10005/actuator, healthUrl=http://TV-DEV-API01:10005/actuator/health, serviceUrl=http://TV-DEV-API01:10005/, metadata={startup=2018-09-10T10:20:41.812+08:00}) at spring-boot-admin ([https://dev-springboot-admin.tvflnet.com/instances]): I/O error on POST request for "https://dev-springboot-admin.tvflnet.com/instances": connect timed out; nested exception is java.net.SocketTimeoutException: connect timed out. Further attempts are logged on DEBUG level
    
    filter {
      # define the field layout of the message
      grok {
        match => { "message" => "traceid:\[%{DATA:traceid}\] %{TIMESTAMP_ISO8601:timestamp} %{DATA:level} %{GREEDYDATA:message}" }
        # message already exists on the event; overwrite it instead of turning it into an array
        overwrite => ["message"]
      }
    }
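For reference, the project-log grok above can be hand-expanded into an ordinary regex (`DATA` is `.*?`, `GREEDYDATA` is `.*`, and `TIMESTAMP_ISO8601` is simplified here) and tested in Python against the sample message:

```python
import re

# Hand-expanded, simplified equivalent of the project-log grok above.
log_re = re.compile(
    r'traceid:\[(?P<traceid>.*?)\] '
    r'(?P<timestamp>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?) '
    r'(?P<level>.*?)\s+(?P<msg>.*)'
)

sample = ('traceid:[] 2018-09-14 02:14:48.209 WARN  '
          'de.codecentric.boot.admin.client.registration.ApplicationRegistrator '
          '115 - Failed to register application ...')

m = log_re.match(sample)
print(m.group('timestamp'), m.group('level'))
```

Prototyping the expanded regex like this is a cheap way to verify the grok captures before restarting Logstash.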
    For multiple distinct message formats, configure multiple grok blocks.
    Logstash start command: nohup ./bin/logstash -f ./config/conf.d/logstash-simple.conf >/dev/null 2>&1 &
  4. Online grok debuggers
    • Inside China: http://grok.qiexun.net/

    • International: http://grokdebug.herokuapp.com/

    • More details: https://www.cnblogs.com/iiiiher/p/7919149.html

    • grok syntax: https://github.com/kkos/oniguruma/blob/master/doc/RE
