
Installing, Configuring, and Using ELK



1. Installing the Elasticsearch Cluster

Build an Elasticsearch cluster on three servers:

node-1 (master node) 10.170.13.1

node-2 (data node) 10.116.35.133

node-3 (data node) 10.44.79.57

Download the package from https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.3.rpm and install elasticsearch-5.4.3.rpm on each of the three servers.

Install:
	~]# yum install elasticsearch-5.4.3.rpm -y
Edit the node-1 config file:
	~]# vim /etc/elasticsearch/elasticsearch.yml
	cluster.name: elasticsearch
	node.name: node-1
	
	network.host: 0.0.0.0
	http.port: 9200
	
	http.cors.enabled: true
	http.cors.allow-origin: "*"
	
	node.master: true
	node.data: true
	discovery.zen.ping.unicast.hosts: ["10.170.13.1"]
Edit the node-2 config file:
	~]# vim /etc/elasticsearch/elasticsearch.yml
	cluster.name: elasticsearch
	node.name: node-2
	
	network.host: 0.0.0.0
	http.port: 9200
	
	http.cors.enabled: true
	http.cors.allow-origin: "*"
	
	node.master: false
	node.data: true
	discovery.zen.ping.unicast.hosts: ["10.170.13.1"]
Edit the node-3 config file:
	~]# vim /etc/elasticsearch/elasticsearch.yml
	cluster.name: elasticsearch
	node.name: node-3
	
	network.host: 0.0.0.0
	http.port: 9200
	
	http.cors.enabled: true
	http.cors.allow-origin: "*"
	
	node.master: false
	node.data: true
	discovery.zen.ping.unicast.hosts: ["10.170.13.1"]
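Note that in this layout only node-1 is master-eligible, which makes it a single point of failure. If you later make more nodes master-eligible, discovery.zen.minimum_master_nodes should be set to a quorum of them to prevent split-brain. A minimal sketch of the quorum rule (illustrative Python, not part of the setup itself):

```python
# Quorum sketch for discovery.zen.minimum_master_nodes (Elasticsearch 5.x).
# The recommended value is (master-eligible nodes // 2) + 1 to avoid split-brain.
def minimum_master_nodes(master_eligible: int) -> int:
    """Return the quorum size for a given number of master-eligible nodes."""
    return master_eligible // 2 + 1

# In the cluster above only node-1 has node.master: true, so the quorum is 1.
print(minimum_master_nodes(1))  # 1
print(minimum_master_nodes(3))  # 2
```

With three master-eligible nodes the setting would be 2.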

Once all three are configured, start node-1 (the master) first, then node-2 and node-3.
Start:
	~]# service elasticsearch start        # 5.x versions often fail their bootstrap checks on first start; check the log and fix the errors that apply
	~]# tail -f /var/log/elasticsearch/elasticsearch.log
Error roundup (some of these came from the web; thanks to the original posters):
Problem 1:
[2016-11-06T16:27:21,712][WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:349) ~[elasticsearch-5.0.0.jar:5.0.0]
at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:630) ~[elasticsearch-5.0.0.jar:5.0.0]
Cause: this is only a warning, caused by an old Linux kernel.
Solutions: 1. reinstall with a newer Linux version, or 2. ignore it; the warning does not affect operation.
 
Problem 2:
ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
Cause: the per-user limit on open file descriptors is too low.
 
Solution:
As root, edit the limits.conf file and add lines like the following:
vi /etc/security/limits.conf
Add:
*  soft  nofile  65536
*  hard  nofile  131072
*  soft  nproc   2048
*  hard  nproc   4096
Note: * applies to every Linux user (replace it with a specific user name, e.g. hadoop, to scope it).
Save, exit, and log in again for the change to take effect.
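A quick way to confirm the new limit is in effect for a process is the standard resource module (an illustrative Python check, Linux only):

```python
import resource

# Read this process's open-file limits (RLIMIT_NOFILE);
# Elasticsearch 5.x's bootstrap check requires a soft limit of at least 65536.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")
if soft != resource.RLIM_INFINITY and soft < 65536:
    print("nofile soft limit is too low for Elasticsearch")
```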
 
Problem 3:
max number of threads [1024] for user [es] likely too low, increase to at least [2048]
Cause: the per-user limit on threads is too low.
Solution: as root, edit the 90-nproc.conf file under the limits.d directory:
vi /etc/security/limits.d/90-nproc.conf
Find this line:
* soft nproc 1024
and change it to:
* soft nproc 2048
Problem 4:
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
Cause: the maximum virtual memory map count is too low.
Solution: as root, edit sysctl.conf:
vi /etc/sysctl.conf
Add the following setting:
vm.max_map_count=655360
Then apply it:
sysctl -p
Restart Elasticsearch and it should start successfully.
 
Problem 5:
Elasticsearch fails to start with "no route to host" / host not found
Cause: the unicast discovery configuration is wrong
Solution:
Check the Elasticsearch config file:
vi  config/elasticsearch.yml
Find this setting:
discovery.zen.ping.unicast.hosts: ["10.170.13.1"]
This entry is usually the culprit; check the exact syntax.
 
Problem 6:
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Cause: the JDK versions differ between Elasticsearch nodes
Solution: use the same JDK across the whole cluster
 
Problem 7:
Unsupported major.minor version 52.0
Cause: the JDK version is too old
Solution: upgrade the JDK; Elasticsearch 5.0.0 requires JDK 1.8
Problem 8:
bin/elasticsearch-plugin install license
ERROR: Unknown plugin license

Cause: the plugin commands changed in Elasticsearch 5.0.0
Solution: use the new command to install plugins, for example:
bin/elasticsearch-plugin install x-pack
Problem 9:
Startup fails with: ERROR: bootstrap checks failed
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
Cause: CentOS 6 does not support SecComp, while ES 5.2.1 defaults bootstrap.system_call_filter to true; the check fails and ES refuses to start. See https://github.com/elastic/elasticsearch/issues/22899
Solution: set bootstrap.system_call_filter to false in elasticsearch.yml, right below the Memory settings:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false

After a successful start, the node-1 log shows node-2 and node-3 joining the cluster:

[2017-07-05T10:49:30,988][INFO ][o.e.c.s.ClusterService   ] [node-1] added {{node-2}{9P1tDYlaTTCTLvgf56qiTg}{tDWHLBA5QVKJVigNeDx-yw}{10.116.35.133}{10.116.35.133:9300},}, reason: zen-disco-node-join[{node-2}{9P1tDYlaTTCTLvgf56qiTg}{tDWHLBA5QVKJVigNeDx-yw}{10.116.35.133}{10.116.35.133:9300}]

 
[2017-07-05T10:49:36,927][INFO ][o.e.c.s.ClusterService   ] [node-1] added {{node-3}{seEWVcyKRnupt6eP2T3-Qg}{W5RrwtY2ToWxuzWFsFdPyA}{10.44.79.57}{10.44.79.57:9300},}, reason: zen-disco-node-join[{node-3}{seEWVcyKRnupt6eP2T3-Qg}{W5RrwtY2ToWxuzWFsFdPyA}{10.44.79.57}{10.44.79.57 :9300}]

Open a browser and test access (e.g. http://10.170.13.1:9200):

Elasticsearch is now up. Next, install the management tool elasticsearch-head. Note: 5.x differs substantially from earlier releases and no longer supports the 2.x-style plugin installation.

Install Node.js:
 Download and set up:
	~]# wget https://nodejs.org/dist/v8.1.3/node-v8.1.3-linux-x64.tar.gz
	~]# tar xf node-v8.1.3-linux-x64.tar.gz -C /usr/local/
	~]# mv /usr/local/node-v8.1.3-linux-x64 /usr/local/node.js
	~]# echo 'export PATH=/usr/local/node.js/bin:$PATH' > /etc/profile.d/nodejs.sh
	~]# source /etc/profile.d/nodejs.sh
	~]# node -v
	v8.1.3
	~]# npm -v
	5.0.3
Download elasticsearch-head:
Download and install:
~]# cd /usr/local/
~]# git clone git://github.com/mobz/elasticsearch-head.git
~]# cd /usr/local/elasticsearch-head/
elasticsearch-head]# npm install
elasticsearch-head]# npm install -g grunt-cli
elasticsearch-head]# npm install -g cnpm --registry=
 Edit the config files:
elasticsearch-head]# vim Gruntfile.js
    connect: {
        server: {
            options: {
                hostname: '0.0.0.0',
                port: 9100,
                base: '.',
                keepalive: true
            }
        }
    }

elasticsearch-head]# vim _site/app.js
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.170.13.1:9200";

Start:
elasticsearch-head]# service elasticsearch stop
elasticsearch-head]# service elasticsearch start
elasticsearch-head]# grunt server

2. Installing Logstash and collecting rsyslog, nginx, and ES logs

Package downloads: https://www.elastic.co/downloads/past-releases

Download whichever version matches your stack; ELK releases move quickly. My ES is 5.4.3, so everything here is installed at 5.4.3 as well.

~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.4.3.rpm   # download the rpm directly

Download and install it on all three ES machines: ~]# yum install logstash-5.4.3.rpm -y

(Even when installed from the rpm, Logstash's bin directory still needs to be on the PATH: /usr/share/logstash/bin/)

Set the environment variable:

~]# echo 'export PATH=/usr/share/logstash/bin:$PATH' > /etc/profile.d/logstash.sh

~]# source /etc/profile.d/logstash.sh

Logstash configuration is a bit more involved, because Logstash has to accept input, process it, and produce output. It uses a three-part configuration: input declares the sources, filter declares how events from those sources are processed, and output declares where they go.
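The three-stage flow can be pictured as a simple pipeline. This is a toy Python illustration of the model, not Logstash itself, and the event values are made up:

```python
# Toy illustration of Logstash's input -> filter -> output model:
# each event flows through the three stages in order.
def input_stage():
    # pretend these lines came from a log file
    yield "Jul  5 10:49:30 host sshd[123]: Accepted password for root"
    yield "Jul  5 10:49:36 host nginx: GET /index.html 200"

def filter_stage(events):
    # tag each event with a type, the way a Logstash filter adds fields
    for line in events:
        yield {"message": line, "type": "system"}

def output_stage(events):
    # ship each structured event to its destination (here, just collect them)
    return list(events)

events = output_stage(filter_stage(input_stage()))
print(len(events))  # 2
```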

~]# vim /etc/logstash/conf.d/all.conf   # add the following configuration
    input { 
        file {  
                path => "/var/log/messages"		#path to the log file
                type => "system"
                start_position => "beginning"
        }        
        syslog {
                type => "system-syslog"			#set a type, matched later in output
                host => "10.170.13.1"
                port => "514"
        }        
        file {  
                path => "/var/log/elasticsearch/elasticsearch.log"
                type => "es-error"
                start_position => "beginning"
                 codec => multiline {			#multiline folds several lines into one event
                    pattern => "^\["
                    negate => "true"
                    what => "previous"
                 }
        
        }        
        file {  
                path => "/var/log/nginx/access-json.log"
                codec => "json"					#parse as JSON
                type => "nginx_access"
                start_position => "beginning"  #read from the start of the file (the default is the end)
        }
 
    }

    output {
        if [type] == "system" {						#when the type matches, this output runs
            elasticsearch {							#the ES output plugin
                    hosts => ["10.170.13.1:9200"]		#ES host:port
                    index => "system-%{+YYYY.MM.dd}"	#index name, one index per day
            }
        }
    
        if [type] == "system-syslog" {
            elasticsearch {
                    hosts => ["10.170.13.1:9200"]
                    index => "system-syslog-%{+YYYY.MM.dd}"
            }
        }
    
        if [type] == "es-error" {
            elasticsearch {
                    hosts => ["10.170.13.1:9200"]
                    index => "es-error-%{+YYYY.MM.dd}"
            }
    
        }
    
            if [type] == "nginx_access" {
                elasticsearch {
                        hosts => ["10.170.13.1:9200"]
                        index => "nginx_access-%{+YYYY.MM.dd}"
                }
            }
    }
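The multiline codec in the es-error section above glues continuation lines (anything not starting with "[") onto the previous event, which is what turns a Java stack trace into a single document. A toy Python re-implementation of that grouping rule, for illustration only:

```python
import re

# Mimic codec => multiline with pattern => "^\[", negate => true,
# what => "previous": lines NOT matching the pattern are appended
# to the previous event instead of starting a new one.
def group_multiline(lines, pattern=r"^\["):
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)           # a new event starts here
        else:
            events[-1] += "\n" + line     # continuation of the previous event
    return events

log = [
    "[2017-07-05T10:49:30] ERROR something broke",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "[2017-07-05T10:49:36] INFO recovered",
]
print(len(group_multiline(log)))  # 2: the stack trace folds into event 1
```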

Configure the system log:

~]# vim /etc/rsyslog.conf
    *.* @@10.170.13.1:514  #append this line at the end to forward all syslog messages to that IP:port

Switch the nginx access log to JSON format:

~]# vim /etc/nginx/nginx.conf
    log_format logstash_json '{ "@timestamp": "$time_local", '
                             '"@fields": { '
                             '"remote_addr": "$remote_addr", '
                             '"remote_user": "$remote_user", '
                             '"body_bytes_sent": "$body_bytes_sent", '
                             '"request_time": "$request_time", '
                             '"status": "$status", '
                             '"request": "$request", '
                             '"request_method": "$request_method", '
                             '"http_referrer": "$http_referer", '
                             '"http_x_forwarded_for": "$http_x_forwarded_for", '
                             '"http_user_agent": "$http_user_agent" } }';
    
    access_log  /var/log/nginx/access-json.log  logstash_json;
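Because every request is now logged as one JSON object, downstream consumers can parse it directly instead of writing grok patterns. An illustrative Python check with a made-up sample line:

```python
import json

# A sample line in the logstash_json format above (field values are made up).
line = ('{ "@timestamp": "05/Jul/2017:10:49:30 +0800", '
        '"@fields": { "remote_addr": "10.0.0.8", "status": "200", '
        '"request": "GET /index.html HTTP/1.1", "request_time": "0.003" } }')

entry = json.loads(line)                 # one json.loads replaces a grok pattern
print(entry["@fields"]["status"])        # 200
print(entry["@fields"]["remote_addr"])   # 10.0.0.8
```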

3. Installing Kibana

Package downloads: https://www.elastic.co/downloads/past-releases

~]# wget 
~]# yum install kibana-5.4.3-x86_64.rpm -y

Kibana is installed on just this one machine (10.170.13.1) for statistics, analysis, and display. This is a lab setup; in production you can install Kibana on every ES machine and put nginx in front for forwarding and authentication.

Edit the config file:

vim /etc/kibana/kibana.yml    		#add the following settings
    server.port: 5601
    server.host: "0.0.0.0"
    elasticsearch.url: http://10.170.13.1:9200/
    kibana.index: ".kibana"

Start Kibana:

~]# service kibana start

Visit http://10.170.13.1:5601

4. Decoupling Logstash with a Redis message queue

In production there are many scenarios where Logstash cannot ship logs straight to ES; a message queue fills the gap. That is the decoupling: log collection is separated from processing and display.

Output to Redis:

 ~]# vim /etc/logstash/conf.d/redis_in.conf
	input {
		stdin {}

	}

	output {
		redis {
			host => "127.0.0.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "demo"
		}

	}

Start:
 ~]# logstash -f /etc/logstash/conf.d/redis_in.conf    # once it is running, type anything into stdin

With it running, open another terminal and inspect Redis:
 ~]# redis-cli 
	127.0.0.1:6379> SELECT 6
	OK
	127.0.0.1:6379[6]> LLEN demo
	(integer) 52
	127.0.0.1:6379[6]> KEYS *
	1) "demo"

Output to ES:

 ~]# vim /etc/logstash/conf.d/redis_out.conf
	input {
		redis {
			host => "127.0.0.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "demo"
		}
	}

	output {
		elasticsearch {
			hosts => ["10.170.13.1:9200"]
			index => "redis-demo-%{+YYYY.MM.dd}"
		}

	}


Start:
 ~]# logstash -f /etc/logstash/conf.d/redis_out.conf    # once it is running, check that the new index shows up in ES

Then check Redis again from another terminal:
 ~]# redis-cli 
	127.0.0.1:6379> SELECT 6
	OK	
    127.0.0.1:6379[6]> LLEN demo
    (integer) 0

The rsyslog, nginx access, and ES logs configured earlier can all be sent into Redis by one Logstash instance, with a second Logstash reading from Redis and writing to ES.
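The list-based handoff behaves like a FIFO queue: the shipper pushes events onto the Redis list and the indexer pops them off the other end, so neither side blocks the other. A toy Python sketch of those semantics, using an in-memory deque instead of Redis:

```python
from collections import deque

# Sketch of the queue semantics behind data_type => "list":
# the shipper appends events and the indexer drains them in FIFO order.
queue = deque()

# shipper side (like the redis output plugin writing a key)
for event in ["event-1", "event-2", "event-3"]:
    queue.append(event)

print(len(queue))          # 3, what LLEN would report while events are queued

# indexer side (like the redis input plugin draining the list)
drained = [queue.popleft() for _ in range(len(queue))]
print(len(queue))          # 0 once the indexer catches up
print(drained[0])          # event-1: arrival order is preserved
```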

Shipper configuration (logs → Redis):
			
input { 
        file {  
                path => "/var/log/messages"
                type => "system"
                start_position => "beginning"
        }
        
        syslog {
                type => "system-syslog"
                host => "10.170.13.1"
                port => "514"
        }

        
        file {  
                path => "/var/log/elasticsearch/elasticsearch.log"
                type => "es-error"
                start_position => "beginning"
                 codec => multiline {
                    pattern => "^\["
                    negate => "true"
                    what => "previous"
                 }
        
        }
        
        file {  
                path => "/var/log/nginx/access-json.log"
                codec => "json"
                type => "nginx_access"
                start_position => "beginning"
        }
 
}


output {
    if [type] == "system" {
		redis {
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "system"
		}
    }
    if [type] == "system-syslog" {
		redis {
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "system-syslog"
		}
    }
    if [type] == "es-error" {
		redis {
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "es-error"
		}

    }
    if [type] == "nginx_access" {
		redis {
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "nginx_access"
		}
	}
}
Indexer configuration (Redis → ES):

input {

		redis {
			type => "system"
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "system"
			}

		redis {
			type => "system-syslog"
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "system-syslog"
		}

		redis {
			type => "es-error"
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "es-error"
		}

		redis {
			type => "nginx_access"
			host => "10.170.13.1"
			port => "6379"
			db => "6"
			data_type => "list"
			key => "nginx_access"
		}	
}

output {
    if [type] == "system" {
        elasticsearch {
                hosts => ["10.170.13.1:9200"]
                index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
                hosts => ["10.170.13.1:9200"]
                index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
                hosts => ["10.170.13.1:9200"]
                index => "es-error-%{+YYYY.MM.dd}"
        }

    }	
	if [type] == "nginx_access" {
            elasticsearch {
                    hosts => ["10.170.13.1:9200"]
                    index => "nginx_access-%{+YYYY.MM.dd}"
            }
        }
}
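The index => "...-%{+YYYY.MM.dd}" pattern used throughout creates one index per day from the event's timestamp (Logstash formats the date in UTC). The same naming can be reproduced with strftime, where Logstash's YYYY.MM.dd corresponds to %Y.%m.%d; an illustrative sketch:

```python
from datetime import datetime, timezone

# Reproduce the daily index naming used by index => "system-%{+YYYY.MM.dd}".
def daily_index(prefix: str, when: datetime) -> str:
    return f"{prefix}-{when.strftime('%Y.%m.%d')}"

ts = datetime(2017, 7, 5, tzinfo=timezone.utc)
print(daily_index("system", ts))        # system-2017.07.05
print(daily_index("nginx_access", ts))  # nginx_access-2017.07.05
```

One index per day keeps deletions cheap: expiring old logs means dropping whole indices rather than deleting documents.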

5. Taking ELK to production:

1. Classify the logs
	system logs   rsyslog	logstash	syslog plugin
	access logs   nginx		logstash	codec json
	error logs    file		logstash	file + multiline
	runtime logs  file		logstash	codec json
	device logs   syslog	logstash	syslog plugin
	debug logs    file		logstash	json or multiline
2. Standardize the logs
	paths   fixed
	format  JSON wherever possible
	
Log stacks:
ELK logstash
EFK Flume
EHK heka

Message queues:
redis
rabbitmq
kafka


This article is from the "誌建" blog; please keep this attribution: http://aoof188.blog.51cto.com/7661673/1949397
