Quickly Setting Up an ELK Centralized Log Management Platform
Our project is distributed, with services spread across multiple servers, so checking logs meant logging into each machine separately, which was tedious. My boss asked me to set up centralized log management, and some searching online led me to ELK.
ELK is the stack of three products from Elastic: Elasticsearch, Logstash, and Kibana.
Official site: https://www.elastic.co/cn/products
1. First, what each of the three products does
1: elasticsearch (the core of ELK)
A distributed full-text search engine built on Lucene; it provides distributed storage for the logs.
2: logstash
It runs on each server and collects the logs there. For example, with logstash configured on 192.168.1.45, it automatically collects that server's logs and ships them to elasticsearch.
3: kibana
It presents the data stored in elasticsearch as charts and dashboards.
With that, we have a rough picture of how the ELK pipeline works.
2. Quick setup (here I install everything on a single server)
As the descriptions above show, elasticsearch is the core piece, so we start configuring there.
1. elasticsearch configuration
elasticsearch is the core of ELK, and note that it must not be started as root, so we create a dedicated account just for it.
First create an account with useradd; I named mine elsearch.
Log in as elsearch, then download from the official site: wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.1.tar.gz
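The account setup might look like this as commands (run as root; the account name elsearch is just this walkthrough's choice):

```shell
# Create a dedicated account -- elasticsearch refuses to start as root
useradd elsearch
passwd elsearch                 # set a password when prompted
# Run further commands as that user, for example:
su - elsearch -c 'whoami'
```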
[elsearch@9378y7s1qz ~]$ tar zxf elasticsearch-6.4.1.tar.gz
[elsearch@9378y7s1qz ~]$ cd elasticsearch-6.4.1/config
Next, edit elasticsearch.yml:
network.host: 0.0.0.0
http.port: 9200
Without this change the service is not reachable from outside the machine.
Now start it: go to the bin directory and run ./elasticsearch
At this point you will find it refuses to start.
It complains that the maximum number of threads this user may use is too low, with at least 4096 required. Where do you change that? On Linux everything is a file, so the number of threads and processes a user may create is governed much like file handles; ulimit -n shows the current user's open-file limit. These are soft limits that we can raise, but each is capped by a hard limit, so raising the soft value to 65535 achieves nothing if the hard limit stays lower. A bit of searching shows you need to vim /etc/security/limits.conf and add or modify:
* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096
The error said 1024 was too low, and the file indeed had a * soft nproc 1024 line; since at least 4096 is required, raise it as above (log out and back in so the new limits take effect), then try again.
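You can check what the current shell actually got; these are the same limits the error complains about:

```shell
# Show this user's current soft limits
ulimit -u    # max user processes/threads (nproc); ES 6.x wants >= 4096
ulimit -n    # max open file handles (nofile); ES 6.x wants >= 65536
# And the hard ceilings the soft values may not exceed:
ulimit -Hu
ulimit -Hn
```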
It still will not start; this time the error is:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [1024] is too low, increase to at least [262144]
For this error, vim /etc/sysctl.conf and add:
vm.max_map_count=262144
then run sysctl -p to apply it.
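To confirm the change took effect:

```shell
# Verify the new limit is live (should print 262144 after sysctl -p)
cat /proc/sys/vm/max_map_count
```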
Start it again and this time it comes up normally. elasticsearch is now configured; next we move on to logstash.
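Once it is up, you can sanity-check it from the shell; 9200 is the HTTP port we configured above:

```shell
# Basic node info: name, cluster, version
curl http://127.0.0.1:9200/
# Cluster health; green or yellow means it is serving requests
curl 'http://127.0.0.1:9200/_cat/health?v'
```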
2. logstash configuration
As before, download logstash first:
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.1.tar.gz
then unpack it.
For configuration we create a file named logstash.conf under config,
and fill in the three sections input, filter, and output. Here input tails every file ending in .log under the logs directory; filter would hold parsing rules but is left empty; output ships events to elasticsearch at 127.0.0.1:9200, one index per day.
input {
    file {
        type => "log"
        path => "/usr/local/logs/*.log"   # the logs you want to collect
        start_position => "beginning"
    }
}

output {
    stdout {
        codec => rubydebug { }
    }

    elasticsearch {
        hosts => "127.0.0.1"              # elasticsearch address
        index => "log-%{+YYYY.MM.dd}"
    }
}
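Before launching for real, Logstash can lint the file for you; the -t (--config.test_and_exit) flag parses the pipeline and exits without starting it:

```shell
# Validate the pipeline config only; exits after the parse check
./logstash -f ../config/logstash.conf --config.test_and_exit
# It should report the configuration as OK if the file parses
```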
With that in place, the logstash startup script lives under bin.
When starting it, point it at the config file we just created:
[root@iZwz95t3hfncu7anavrafmZ bin]# ./logstash -f ../config/logstash.conf > logstash.log
Sending Logstash logs to /usr/local/logstash/logs which is now configured via log4j2.properties
[2018-09-19T21:37:07,725][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-19T21:37:08,899][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-09-19T21:37:13,749][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-19T21:37:14,763][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://0.0.0.0:9200/]}}
[2018-09-19T21:37:14,783][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://0.0.0.0:9200/, :path=>"/"}
[2018-09-19T21:37:15,125][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://47.107.75.26:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2018-09-19T21:37:15,194][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//47.107.75.26:9200"]}
[2018-09-19T21:37:15,756][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/local/logstash/data/plugins/inputs/file/.sincedb_71ed980b1f25dc3be65a3d965d78b265", :path=>["/usr/local/logs/*.log"]}
[2018-09-19T21:37:15,849][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x123f661f run>"}
[2018-09-19T21:37:15,956][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-19T21:37:15,986][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2018-09-19T21:37:16,610][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
In the log you can see it connect to elasticsearch on port 9200 and start its own API endpoint on port 9600. Logstash is now running.
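A quick end-to-end check: append a line to a watched file and then ask elasticsearch for it (the path and index pattern match the config above):

```shell
# Write a test entry into the directory logstash is tailing
echo "hello elk" >> /usr/local/logs/test.log
# Give logstash a moment to ship it, then search the daily indices
sleep 5
curl 'http://127.0.0.1:9200/log-*/_search?q=hello&pretty'
```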
3. kibana configuration
elasticsearch and logstash are configured; to get the visualizations we need kibana, so let's set it up next.
Again, download kibana first:
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.1-linux-x86_64.tar.gz
then unpack it. This one also needs its config file edited, under the config directory:
[root@slave1 config]# vim kibana.yml

elasticsearch.url: "http://localhost:9200"
server.host: "0.0.0.0"
Then start kibana from bin:
[root@slave1 bin]# ./kibana
  log [01:23:27.650] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log [01:23:27.748] [info][status][plugin:[email protected]] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log [01:23:27.786] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log [01:23:27.794] [warning] You're running Kibana 5.2.0 with some different versions of Elasticsearch. Update Kibana or Elasticsearch to the same version to prevent compatibility issues: v5.6.4 @ 192.168.23.151:9200 (192.168.23.151)
  log [01:23:27.811] [info][status][plugin:[email protected]] Status changed from yellow to green - Kibana index ready
  log [01:23:28.250] [info][status][plugin:[email protected]] Status changed from uninitialized to green - Ready
  log [01:23:28.255] [info][listening] Server running at http://0.0.0.0:5601
  log [01:23:28.259] [info][status][ui settings] Status changed from uninitialized to green - Ready
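As the log shows, kibana listens on port 5601; its status endpoint is handy for checking from the shell before opening a browser:

```shell
# Reports the health of kibana itself and its connection to elasticsearch
curl http://127.0.0.1:5601/api/status
```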
4. Everything is now set up
All that remains is to visit the page: open http://your-server-ip:5601 in a browser and the Kibana UI appears.
OK, our ELK stack is now up and running.