
ELK Log Reporting

(1) Introduction to ELK:

Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack.
Logstash: collects, parses, and forwards logs
Elasticsearch: stores, searches, and analyzes logs
Kibana: visualizes logs

Workflow:

[diagram: logs → Filebeat → Logstash → Elasticsearch → Kibana]

Analyzing the Nginx log format

log_format  main  '$host $remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent $upstream_response_time "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" "$uid_got" "$uid_set" "$http_x_tencent_ua" "$upstream_addr"';

Fields in the Nginx log format

$host                    domain name or IP the client requested
$remote_addr             client IP address
$remote_user             client user name
[$time_local]            local time when the request was received
$request                 request URL and HTTP protocol version used
$status                  status code returned for this request
$body_bytes_sent         bytes sent in the response body (excluding headers)
$upstream_response_time  time the upstream took to answer this request
$http_referer            page the request was referred from
$http_user_agent         client browser / user-agent string
$http_x_forwarded_for    real client IP when a proxy sits in between
$uid_got                 cookie identifier received from the client
$uid_set                 cookie identifier sent to the client
$http_x_tencent_ua       custom X-Tencent-UA request header
$upstream_addr           address of the upstream host that actually handled the request
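To make the field positions concrete, here is a hypothetical log line in this format (every value below is invented for illustration), with $status and $body_bytes_sent pulled out by whitespace position:

```shell
# Hypothetical sample line matching the log_format above (values made up):
line='example.com 10.0.0.5 - - [21/Mar/2017:09:49:15 +0800] "GET /index.html HTTP/1.1" 200 512 0.003 "-" "curl/7.29.0" "-" "-" "-" "-" "10.0.0.8:8080"'
# Splitting on whitespace, $status is field 10 and $body_bytes_sent is field 11:
echo "$line" | awk '{print $10, $11}'
# prints: 200 512
```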

Grok pattern for this Nginx log format in Logstash

%{IP:ServerHost} %{IP:agencyip} - - \[(?<localTime>.*)\] \"(?<verb>\w{3,4}) (?<site>.*?(?=\s)) (?<httpprotcol>.*?)\" (?<statuscode>\d{3}) (?<bytes>\d+) (?<responsetime>(\d+|-)) \"(?<referer>.*?)\" \"(?<agent>.*?)\" \"(?<realclientip>(-|\d+))\" \"(?<uid_got>(-|\d+))\" \"(?<uid_set>(-|\d+))\" \"(?<tencent_ua>(-|\d+))\" \"(?<upstream_ip>(-|\d+))\"

(2) Collecting Nginx logs with ELK

1. Filebeat

Installation

# wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.2-linux-x86.tar.gz
# tar xvf filebeat-5.2.2-linux-x86.tar.gz

Configuration

# vim filebeat.yml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/nginx/*log
output.logstash:
  hosts: ["192.168.1.141:5044"]

Start

# ./filebeat  -e -c filebeat.yml

2. Logstash

Installation

# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.tar.gz
# tar xvf logstash-5.2.2.tar.gz

Logstash configuration: grok patterns

# vim patterns/test
# pattern for the access log

ACCESSLOG %{HOSTNAME:http_host} %{IPORHOST:remote_addr} - (%{USERNAME:user}|-) \[%{HTTPDATE:log_timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{BASE10NUM:http_status} (?:%{BASE10NUM:body_bytes_sent}|-) (%{BASE16FLOAT:upstream_response_time}|-) "(%{DATA:http_referer}|-)" %{QS:client_agent} "(%{DATA:http_x_forwarded_for}|-)" "(%{DATA:uid_got}|-)" %{DATA:uid_set} "(%{DATA:http_x_tencent_ua}|-)" "(?:%{DATA:upstream_addr}|-)"

Pattern for the error log

ERRORLOG %{DATESTAMP:timestamp} \[%{LOGLEVEL:err_level}\] %{GREEDYDATA:err_mess}
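As a rough sanity check, the shape ERRORLOG expects (timestamp, bracketed level, free text) can be approximated with a plain extended regex against a made-up error line:

```shell
# Hypothetical Nginx error-log line (content invented for illustration):
err='2017/03/21 09:49:15 [error] 1234#0: *1 connect() failed (111: Connection refused)'
# DATESTAMP + [LOGLEVEL] approximated by hand as a plain ERE:
echo "$err" | grep -Eq '^[0-9]{4}/[0-9]{2}/[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} \[[a-z]+\]' && echo matched
# prints: matched
```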

Configuration file

# cat conf.d/nginx.conf 
input {
    beats {
        port => 5044 
    }
}

filter {
           if ([source] =~ "access")
           {
            grok {
              patterns_dir => "/usr/local/logstash-5.2.2/patterns/"
              match => {"message" => "%{ACCESSLOG}"}
             }

           date {
                  match => ["log_timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
                 }
           }

           if ([source] =~ "error" )
           { 
             grok{
              patterns_dir => "/usr/local/logstash-5.2.2/patterns/"
              match => {"message" => "%{ERRORLOG}"}
             }

            date {
                match => ["timestamp","yyyy/MM/dd HH:mm:ss"]
               }
           }
}

output {
     elasticsearch { hosts => ["192.168.1.134:9200"] }
     stdout { codec => rubydebug }  # also print events to the screen
}
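The date filter in the config above turns the access-log timestamp into a UTC @timestamp. As an illustration of the "dd/MMM/yyyy:HH:mm:ss Z" layout it parses, the same conversion can be done by hand with GNU date (sample value invented):

```shell
# Sample $time_local value (invented) in the "dd/MMM/yyyy:HH:mm:ss Z" layout:
ts="21/Mar/2017:09:49:15 +0800"
# Rearrange into a form GNU date accepts, then print in UTC, which is
# roughly what the date filter stores in @timestamp:
date -u -d "$(echo "$ts" | sed 's#/# #g; s#\([0-9]\{4\}\):#\1 #')" +%Y-%m-%dT%H:%M:%SZ
# prints: 2017-03-21T01:49:15Z
```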

Start Logstash:

# bin/logstash -f conf.d/nginx.conf --config.test_and_exit
# bin/logstash -f conf.d/nginx.conf

3. Elasticsearch

Installation

# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.zip
# unzip elasticsearch-5.2.2.zip
# cd elasticsearch-5.2.2/

Elasticsearch configuration

$ cat config/elasticsearch.yml | grep -Ev "^#|^$"
network.host: 192.168.1.134
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.134"]
discovery.zen.minimum_master_nodes: 1
bootstrap.system_call_filter: false
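discovery.zen.minimum_master_nodes follows the usual quorum rule for ES 5.x, (master-eligible nodes / 2) + 1, to avoid split brain; for this single-node setup that works out to 1:

```shell
# Quorum rule for discovery.zen.minimum_master_nodes (ES 5.x guidance):
nodes=1   # master-eligible nodes in this demo cluster
echo $(( nodes / 2 + 1 ))
# prints: 1
```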

$ cat config/jvm.options | grep -Ev "^#|^$"
-Xmx128M
-Xms128M
... (remaining settings left at their defaults)

Start Elasticsearch as a non-root user

# useradd es -p 123456
# su - es
$ bin/elasticsearch
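One caveat with the command above: useradd -p expects an already-encrypted password, not plain text, so `-p 123456` would store an unusable hash. A sketch of generating a proper crypt hash first, assuming openssl is available:

```shell
# useradd -p takes a *crypted* password; generate one with openssl first
# (MD5 crypt shown for portability; prefer SHA-512 where supported):
hash=$(openssl passwd -1 '123456')
echo "$hash" | cut -c1-3   # crypt hashes start with $<id>$, here $1$
# then: useradd es -p "$hash"   (requires root; not run here)
```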

4. Kibana

# kibana-5.2.2-windows-x86.zip
# unzip the archive, then edit the config file in the extracted directory
server.port: 5601
server.host: "192.168.3.57"
elasticsearch.url: "http://192.168.1.134:9200"
Start
# bin\kibana.bat
Verify
Open 192.168.3.57:5601 in a browser; the collected Nginx log entries should show up there.

(3) ELK installation errors and fixes

1. Logstash configuration test error

[2017-03-20T14:53:41,781][ERROR][logstash.agent           ] Cannot load an invalid configuration {:reason=>"Expected one of #, input, filter, output at line 1, column 19 (byte 19) after "}

Fix:

This error usually means what was handed to Logstash is not a valid config (for example, a shell-quoting problem around -e, or -f pointing at the wrong file). Verify the install with a minimal inline pipeline first:

 bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'

2. Elasticsearch fails to start (ES_HEAP_SIZE no longer supported)

# bin/elasticsearch
Error: encountered environment variables that are no longer supported
Use jvm.options or ES_JAVA_OPTS to configure the JVM
ES_HEAP_SIZE=512m: set -Xms512m and -Xmx512m in jvm.options or add "-Xms512m -Xmx512m" to ES_JAVA_OPTS

Fix (set the heap in jvm.options instead):

$ vim config/jvm.options 
-Xmx128M
-Xms128M
# sh -x bin/elasticsearch

3. Elasticsearch fails to start (running as root)

# ./bin/elasticsearch
[2017-03-21T09:49:15,924][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:125) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:112) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.SettingCommand.execute(SettingCommand.java:54) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:122) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.cli.Command.main(Command.java:88) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:89) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:82) ~[elasticsearch-5.2.2.jar:5.2.2]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root

Fix:

# useradd es -p 123456
# su - es
$ ./bin/elasticsearch

4. Bootstrap check errors after setting network.host to a real IP

ERROR: bootstrap checks failed
initial heap size [130023424] not equal to maximum heap size [134217728]; this can cause resize pauses and prevents mlockall from locking the entire heap
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Fix (for the vm.max_map_count check; the heap-size mismatch is fixed by making -Xms and -Xmx equal in jvm.options, and the syscall-filter check is covered in item 6):

# sysctl -w vm.max_map_count=262144
vm.max_map_count = 262144
# sysctl -a | grep vm.max_map_count
vm.max_map_count = 262144

# tail -1  /etc/sysctl.conf
vm.max_map_count=262144
# sysctl -p

5. Bootstrap check failure (max file descriptors)

ERROR: bootstrap checks failed
max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Fix:

# tail -2 /etc/security/limits.conf
es soft nofile 65536
es hard nofile 65536
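The limits.conf change only applies to fresh login sessions; after logging in again as the es user, it can be confirmed with ulimit (the value printed depends on the current session's limits):

```shell
# Re-login as the es user, then confirm the open-file limit
# (should report 65536 once the limits.conf change is picked up):
ulimit -n
```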

6. Bootstrap check failure (system call filters)

[FMOoYUT] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
ERROR: bootstrap checks failed
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

Fix:

# tail -1 config/elasticsearch.yml
bootstrap.system_call_filter: false