
Building ELK from Scratch and a Simple Log Collection Application

Many blog posts already explain ELK's theory and architecture diagrams in detail, so they are not repeated here. This post simply records a basic ELK setup and a simple log collection use case.

Preparation before installation

1. Environment:

IP                      Hostname    Services deployed
10.0.0.101 (CentOS 7)   test101     JDK, Elasticsearch, Logstash, Kibana and Filebeat (Filebeat is used to test collecting test101's own messages log)
10.0.0.102 (CentOS 7)   test102     nginx and Filebeat (Filebeat is used to test collecting test102's nginx logs)

2. Packages:
jdk-8u151-linux-x64.tar.gz
elasticsearch-6.4.2.tar.gz
kibana-6.4.2-linux-x86_64.tar.gz
logstash-6.4.2.tar.gz
Official ELK download page: https://www.elastic.co/cn/downloads

Deploying the ELK server side

First deploy the JDK, Elasticsearch, Logstash and Kibana on test101 to build the ELK server side. Upload the four packages above to /root on test101.

1. Deploy the JDK

# tar xf jdk-8u151-linux-x64.tar.gz -C /usr/local/
# echo -e "export JAVA_HOME=/usr/local/jdk1.8.0_151\n export JRE_HOME=\${JAVA_HOME}/jre\n export CLASSPATH=.:\${JAVA_HOME}/lib:\${JRE_HOME}/lib\n export  PATH=\${JAVA_HOME}/bin:\$PATH" >>/etc/profile
# source /etc/profile
# java -version   # running jps also works as a check

Note: if you accidentally break /etc/profile, see the post "What to do when /etc/profile is broken and no commands can run?".

2. Create a dedicated elk user

The elk user is used to start Elasticsearch, and it is also configured later in the Filebeat configuration file when collecting logs.

# useradd elk;echo 12345678|passwd elk --stdin    # create the elk user with the password 12345678

3. Deploy Elasticsearch

3.1 Extract the package:

# tar xf elasticsearch-6.4.2.tar.gz -C /usr/local/

3.2 Edit the configuration file /usr/local/elasticsearch-6.4.2/config/elasticsearch.yml as follows:

[root@test101 config]# egrep -v "^#|^$" /usr/local/elasticsearch-6.4.2/config/elasticsearch.yml
cluster.name: elk
node.name: node-1
path.data: /opt/elk/es_data
path.logs: /opt/elk/es_logs
network.host: 10.0.0.101
http.port: 9200
discovery.zen.ping.unicast.hosts: ["10.0.0.101:9300"]
discovery.zen.minimum_master_nodes: 1
[root@test101 config]# 

3.3 Edit /etc/security/limits.conf and /etc/sysctl.conf as follows:

# echo -e "* soft nofile 65536\n* hard nofile 131072\n* soft nproc 2048\n* hard nproc 4096\n" >>/etc/security/limits.conf
# echo "vm.max_map_count=655360" >>/etc/sysctl.conf
# sysctl -p
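
Optionally (this is not part of the original steps), you can verify that the kernel parameter and the new limits actually took effect before starting Elasticsearch; a minimal check:

# sysctl vm.max_map_count                    # should print vm.max_map_count = 655360
# su - elk -c "ulimit -n; ulimit -u"         # open-files / max-processes limits seen by the elk user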

3.4 Create the data and log directories and give them to the elk user:

# mkdir /opt/elk/{es_data,es_logs} -p
# chown elk:elk -R /opt/elk/
# chown elk:elk -R /usr/local/elasticsearch-6.4.2/

3.5 Start Elasticsearch:

# cd /usr/local/elasticsearch-6.4.2/bin/
# su elk
$ nohup /usr/local/elasticsearch-6.4.2/bin/elasticsearch >/dev/null 2>&1 &

3.6 Check the process and ports:

[root@test101 ~]# ss -ntlup| grep -E "9200|9300"
tcp    LISTEN     0      128       ::ffff:10.0.0.101:9200                 :::*                   users:(("java",pid=6001,fd=193))
tcp    LISTEN     0      128       ::ffff:10.0.0.101:9300                 :::*                   users:(("java",pid=6001,fd=186))
[root@test101 ~]# 
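
Besides the port check, you can query Elasticsearch over HTTP to confirm the node is answering; a minimal check based on the configuration above:

# curl http://10.0.0.101:9200/                          # returns node and cluster info as JSON
# curl http://10.0.0.101:9200/_cluster/health?pretty    # "status" should be green (or yellow once indices with replicas exist)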

Note:
If Elasticsearch refuses to start, check things such as the permissions on the ES directories and the amount of server memory; see "Summary: several situations in which Elasticsearch fails to start and how to fix them".

4. Deploy Logstash

4.1 Extract the package:

# tar xf logstash-6.4.2.tar.gz -C /usr/local/

4.2 Edit the configuration file /usr/local/logstash-6.4.2/config/logstash.yml as follows:

[root@test101 logstash-6.4.2]# egrep -v "^#|^$" /usr/local/logstash-6.4.2/config/logstash.yml
 path.data: /opt/elk/logstash_data
 http.host: "10.0.0.101"
 path.logs: /opt/elk/logstash_logs
 path.config: /usr/local/logstash-6.4.2/conf.d    # this line is not in the default file; just append it at the end
[root@test101 logstash-6.4.2]# 

4.3 Create conf.d and add the log-processing file syslog.conf:

[root@test101 conf.d]# mkdir /usr/local/logstash-6.4.2/conf.d
[root@test101 conf.d]# cat /usr/local/logstash-6.4.2/conf.d/syslog.conf 
input {
 # input from Filebeat clients
  beats {
     port => 5044
  }

}

 # filtering
 #filter { }

output {
# standard output, for debugging
  stdout {
   codec => rubydebug { }
  }

# output to Elasticsearch
  elasticsearch {
    hosts => ["http://10.0.0.101:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
  }

}
[root@test101 conf.d]# 

4.4 Create the data and log directories and give them to the elk user:

# mkdir /opt/elk/{logstash_data,logstash_logs} -p
# chown -R elk:elk /opt/elk/
# chown -R elk:elk /usr/local/logstash-6.4.2/

4.5 Validate the configuration (test run):

[root@test101 conf.d]# /usr/local/logstash-6.4.2/bin/logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf --config.test_and_exit   # this step may take a while before it shows any output
Sending Logstash logs to /opt/elk/logstash_logs which is now configured via log4j2.properties
[2018-11-01T09:49:14,299][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/elk/logstash_data/queue"}
[2018-11-01T09:49:14,352][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/elk/logstash_data/dead_letter_queue"}
[2018-11-01T09:49:16,547][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-11-01T09:49:26,510][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@test101 conf.d]# 

4.6 Start the service for real:

# nohup /usr/local/logstash-6.4.2/bin/logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf >/dev/null 2>&1 &    # start in the background

4.7 Check the process and ports:

[root@test101 local]# ps -ef|grep logstash
root       6325    926 17 10:08 pts/0    00:01:55 /usr/local/jdk1.8.0_151/bin/java -Xms1g -Xmx1g ... -cp /usr/local/logstash-6.4.2/logstash-core/lib/jars/... org.logstash.Logstash -f /usr/local/logstash-6.4.2/conf.d/syslog.conf
root       6430    926  0 10:19 pts/0    00:00:00 grep --color=auto logstash

[root@test101 local]# netstat -tlunp|grep 6325
tcp6       0      0 :::5044                 :::*                    LISTEN      6325/java           
tcp6       0      0 10.0.0.101:9600         :::*                    LISTEN      6325/java           
[root@test101 local]# 
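
Logstash also exposes a monitoring API on port 9600 (bound to the http.host set above), which can be used to confirm that the pipeline from conf.d/syslog.conf was loaded; a minimal check:

# curl http://10.0.0.101:9600/?pretty                  # basic node info
# curl http://10.0.0.101:9600/_node/pipelines?pretty   # should list the pipeline(s) Logstash has loaded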

5. Deploy Kibana

5.1 Extract the package:

# tar xf kibana-6.4.2-linux-x86_64.tar.gz  -C /usr/local/

5.2 Edit the configuration file /usr/local/kibana-6.4.2-linux-x86_64/config/kibana.yml as follows:

[root@test101 ~]# egrep -v "^#|^$" /usr/local/kibana-6.4.2-linux-x86_64/config/kibana.yml 
server.port: 5601
server.host: "10.0.0.101"
elasticsearch.url: "http://10.0.0.101:9200"
kibana.index: ".kibana"
[root@test101 ~]# 

5.3 Change the owner of the Kibana directory to elk:

# chown elk:elk -R /usr/local/kibana-6.4.2-linux-x86_64/

5.4 Start Kibana:

# nohup  /usr/local/kibana-6.4.2-linux-x86_64/bin/kibana >/dev/null 2>&1 &

5.5 Check the process and port:

[root@test101 local]# ps -ef|grep kibana
root       6381    926 28 10:16 pts/0    00:00:53 /usr/local/kibana-6.4.2-linux-x86_64/bin/../node/bin/node --no-warnings /usr/local/kibana-6.4.2-linux-x86_64/bin/../src/cli
root       6432    926  0 10:19 pts/0    00:00:00 grep --color=auto kibana
[root@test101 local]# netstat -tlunp|grep 6381
tcp        0      0 10.0.0.101:5601         0.0.0.0:*               LISTEN      6381/node           
[root@test101 local]# 
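
Kibana also has a status endpoint that can be checked before opening a browser; a minimal check based on the configuration above:

# curl http://10.0.0.101:5601/api/status    # the overall state should be "green" once Kibana can reach Elasticsearch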

5.6 Open http://10.0.0.101:5601 in a browser to access the Kibana UI.

At this point, the whole ELK server side is set up.

Log collection with ELK

With the server side deployed, the next step is to configure log collection, which is where Filebeat comes in.

Application 1: collect the messages and secure logs of the ELK host itself (test101)

1. On the Kibana home page, click "Add log data".

2. Choose "System logs".

3. Choose RPM; the page then lists the steps for adding the logs (those steps have a small gotcha, so you can follow the steps below instead):
3.1 Install the plugin into Elasticsearch on test101:

# cd /usr/local/elasticsearch-6.4.2/bin/
# ./elasticsearch-plugin install ingest-geoip
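
Elasticsearch only loads a newly installed plugin after the node restarts, which the guided steps do not point out. A minimal restart-and-verify sketch, run as the elk user (pkill by the main class is just one way to stop the node):

# su elk
$ pkill -u elk -f org.elasticsearch.bootstrap.Elasticsearch     # stop the running node
$ nohup /usr/local/elasticsearch-6.4.2/bin/elasticsearch >/dev/null 2>&1 &
$ curl -s http://10.0.0.101:9200/_nodes/plugins?pretty | grep ingest-geoip   # confirm the plugin shows up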

3.2 Download and install Filebeat on test101:

# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-x86_64.rpm
# rpm -vi filebeat-6.4.2-x86_64.rpm

3.3 Configure Filebeat on test101 by changing the following parts of /etc/filebeat/filebeat.yml:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true   # note: the default is false; the Kibana instructions do not mention changing it, but if it is not set to true no log content shows up in Kibana
  paths:      # logs to collect; here the messages and secure logs
    - /var/log/messages*
    - /var/log/secure*
#============================== Kibana =====================================
setup.kibana:
  host: "10.0.0.101:5601"
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["10.0.0.101:9200"]
  username: "elk"
  password: "12345678"
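
Before enabling the module and starting the service, Filebeat can validate its own configuration and its connection to the output; a minimal check with the filebeat CLI:

# filebeat test config    # should print "Config OK"
# filebeat test output    # checks the connection to 10.0.0.101:9200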

3.4 On test101, run the following command to enable the system module, which sets up /etc/filebeat/modules.d/system.yml:

# filebeat modules enable system
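
To confirm the module really is active, list the modules; "system" should now appear in the enabled section:

# filebeat modules list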

3.5 Start Filebeat on test101:

# filebeat setup
# service filebeat start
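
Once Filebeat is running it writes into filebeat-* indices in Elasticsearch, so a quick way to confirm that data is arriving before switching to Kibana is to list the indices (a minimal check):

# curl 'http://10.0.0.101:9200/_cat/indices?v' | grep filebeat    # a filebeat-6.4.2-* index should appear with a growing docs.count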

3.6 Then go back to Kibana's Discover page and search for the keywords "messages" and "secure" to see the corresponding logs.

Application 2: collect the nginx logs of the 10.0.0.102 (test102) server

In application 1 we collected logs from the ELK server itself; now let's collect logs from test102.

1. Install nginx on test102:

# yum -y install nginx
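
The guided steps do not show starting nginx, but it has to be running and serving at least a few requests so that /var/log/nginx/access.log has something for Filebeat to ship; a minimal sketch on test102:

# systemctl start nginx && systemctl enable nginx
# curl -s -o /dev/null http://10.0.0.102/    # generate a request so access.log gets an entry
# tail -n1 /var/log/nginx/access.log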


2. As in application 1, on the Kibana home page click "Add log data", then choose "Nginx logs" to find the installation steps.

3. Choose RPM; the page then lists the steps for adding the logs (you can follow the steps below):
3.1 Install the plugins into Elasticsearch on test101:

# cd /usr/local/elasticsearch-6.4.2/bin/
# ./elasticsearch-plugin install ingest-geoip    # already installed in application 1, so this can be skipped
# ./elasticsearch-plugin install ingest-user-agent    # as with ingest-geoip, restart the node afterwards so the plugin is loaded

======= All of the following steps are performed on the 10.0.0.102 (test102) server =======
3.2 Download and install Filebeat on test102:

# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-x86_64.rpm
# rpm -vi filebeat-6.4.2-x86_64.rpm

3.3 Configure Filebeat on test102 by changing the following parts of /etc/filebeat/filebeat.yml:

#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true   # note: the default is false; the Kibana instructions do not mention changing it, but if it is not set to true no log content shows up in Kibana
  paths:      # logs to collect; here everything under /var/log/nginx/, including access.log and error.log, hence the wildcard
    - /var/log/nginx/*
#============================== Kibana =====================================
setup.kibana:
  host: "10.0.0.101:5601"
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  hosts: ["10.0.0.101:9200"]
  username: "elk"
  password: "12345678"

3.4 On test102, run the following command to enable the nginx module, which sets up /etc/filebeat/modules.d/nginx.yml:

# filebeat modules enable nginx

After running it, the file contains the following:

[root@test102 ~]# cat /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
[root@test102 ~]# 

3.5 Start Filebeat on test102:

# filebeat setup
# service filebeat start
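
A couple of optional checks that the nginx logs are actually being shipped before going back to Kibana (this assumes the default RPM log location /var/log/filebeat; the fileset.module field is added by the Filebeat nginx module):

# grep -i "Harvester started" /var/log/filebeat/filebeat | tail -n2                  # Filebeat should be harvesting access.log and error.log
# curl -s 'http://10.0.0.101:9200/filebeat-*/_count?pretty&q=fileset.module:nginx'   # document count for nginx log lines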

3.6 Then go back to Kibana's Discover page to see the nginx logs.

Note:
Some articles also install the elasticsearch-head plugin; it is not installed in this post.