ELK 6.3.1 + Redis + Filebeat: collecting Docker Swarm logs for analysis
You can refer to the Kafka + ZooKeeper with ELK collection setup above; now let's get started.
To keep a sudden spike in log volume from crashing the servers, we put Redis in front of ELK as a buffer: Filebeat writes the data into Redis first, and Logstash then pulls it out of Redis and ships it into Elasticsearch.
Redis installation:
#!/bin/bash
# 6379 Redis-Server
tar zxf redis-3.0.0-rc5.tar.gz
yum install gcc-c++ make cmake -y
mkdir /usr/local/redis
mv redis-3.0.0-rc5 /usr/local/src/
cd /usr/local/src/redis-3.0.0-rc5 && make && make PREFIX=/usr/local/redis install
cp /usr/local/src/redis-3.0.0-rc5/utils/redis_init_script /etc/init.d/redis
mkdir /usr/local/redis/conf
cp /usr/local/src/redis-3.0.0-rc5/redis.conf /usr/local/redis/conf/6379.conf
sed -i 's|CONF="/etc/redis/${REDISPORT}.conf"|CONF="/usr/local/redis/conf/${REDISPORT}.conf"|g' /etc/init.d/redis
sed -i 's|EXEC=/usr/local/bin/redis-server|EXEC=/usr/local/redis/bin/redis-server|g' /etc/init.d/redis
sed -i 's|CLIEXEC=/usr/local/bin/redis-cli|CLIEXEC=/usr/local/redis/bin/redis-cli|g' /etc/init.d/redis
sed -i 's|pidfile /var/run/redis.pid|pidfile /var/run/redis_6379.pid|g' /usr/local/redis/conf/6379.conf
sed -i 's|dir ./|dir /usr/local/redis/conf|g' /usr/local/redis/conf/6379.conf
/etc/init.d/redis start &
sleep 5
Then make a few configuration changes and restart the service:
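The post does not show which settings were changed; as a hedged sketch, the two adjustments most commonly needed so remote Filebeat hosts can reach Redis are the bind address and the daemonize flag. The snippet below edits a scratch copy for illustration; on the real server, point CONF at /usr/local/redis/conf/6379.conf instead:

```shell
# Demonstrated on a scratch copy; on the server use:
#   CONF=/usr/local/redis/conf/6379.conf
CONF=$(mktemp)
printf 'bind 127.0.0.1\ndaemonize no\n' > "$CONF"   # stand-in for 6379.conf
sed -i 's|^bind 127.0.0.1|bind 0.0.0.0|' "$CONF"    # accept remote connections
sed -i 's|^daemonize no|daemonize yes|' "$CONF"     # run in the background
grep -E '^(bind|daemonize)' "$CONF"
# then restart: /etc/init.d/redis stop; /etc/init.d/redis start
```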
Next, set up the Filebeat configuration file:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/docker-nginx/access_json.log
  fields:
    type: 192.168.9.36-nginx
- type: log
  enabled: true
  paths:
    - /var/log/docker-tomcat/catalina.out
  fields:
    type: 192.168.9.36-tomcat
  # include_lines: ['ERROR','WARN']
  # exclude_lines: ['DEBUG']
output.redis:
  enabled: true
  hosts: ["192.168.9.142"]
  port: 6379
  db: 0
  timeout: 6s
  key: "192.168.9.36"
  max_retries: 3
Then start it (redirecting stderr as well, since -e logs to stderr):
nohup ./filebeat -e -c filebeat.yml > /dev/null 2>&1 &
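Before (or after) starting, Filebeat 6.x also ships `test` subcommands that catch configuration and connectivity problems early; a quick hedged sanity check against the config above:

```shell
./filebeat test config -c filebeat.yml   # validate the YAML syntax and settings
./filebeat test output -c filebeat.yml   # verify the Redis output is reachable
```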
Note: if you want to test whether data is reaching Redis, do not start Logstash yet, otherwise it will immediately pull the entries out of Redis. So leave Logstash stopped for now, and hit the containers a few times to generate some log traffic.
Then check in Redis:
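The original screenshot of this check is not included; a hedged example of the commands, using the key name configured in the filebeat.yml above:

```shell
# Inspect the list key Filebeat writes to ("192.168.9.36" comes from
# output.redis.key in filebeat.yml; redis-cli path matches the install above):
/usr/local/redis/bin/redis-cli -h 192.168.9.142 LLEN "192.168.9.36"       # queue depth
/usr/local/redis/bin/redis-cli -h 192.168.9.142 LRANGE "192.168.9.36" 0 0 # peek at one entry
```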
Once Logstash is started, the data will be pulled out of Redis. The Logstash configuration:
input {
  redis {
    data_type => "list"
    host => "192.168.9.142"
    port => "6379"
    key => "192.168.9.36"
  }
}
filter {
  date {
    match => ["logdate", "MMM dd HH:mm:ss yyyy"]
    target => "@timestamp"
    timezone => "Asia/Shanghai"
  }
  ruby {
    code => "event.timestamp.time.localtime+8*60*60"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.9.142:9200"]
    # By default [@metadata][beat] (the index field set in Filebeat) is used
    # to distinguish indices:
    # index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # Here a custom fields entry is used to distinguish the indices instead:
    index => "%{[fields][type]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
After starting Logstash you can see there is no data left in Redis; it has moved on into Elasticsearch, and it now shows up in Kibana.
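To confirm this on the Elasticsearch side, a hedged check: list the indices and look for names following the %{[fields][type]}-... pattern from the Logstash output above.

```shell
# List all indices; expect entries named after the fields.type values,
# e.g. 192.168.9.36-nginx-<version>-<date> (exact suffix depends on your setup):
curl -s 'http://192.168.9.142:9200/_cat/indices?v'
```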