Building a real-time log collection system with nginx + Flume + HDFS

1. Edit nginx.conf and add the following configuration:

http {
    # define the log format
    log_format lf '$remote_addr^A$msec^A$http_host^A$request_uri';
    server {
        listen       80;
        server_name  localhost;  
        location / {
            access_log /home/bxp/Documents/install/tengine-2.2.0/log/nginx/access.log lf;
            root   html;
        }
    }
}
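
With this log format, each request is written to access.log as one line whose fields are joined by the ^A separator used in the format string. A line would look roughly like the following (all values here are illustrative):

192.168.1.10^A1494147590.073^Alocalhost^A/index.html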

2. Restart nginx:

systemctl restart nginx
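
To verify that logging works, issue a test request and check the last line of the log file (paths as configured above):

curl http://localhost/
tail -n 1 /home/bxp/Documents/install/tengine-2.2.0/log/nginx/access.log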

3. Create the Flume agent configuration file with the following content:

#define agent
agent.sources = r2
agent.channels = c2
agent.sinks = k2

# defined source 
agent.sources.r2.type = exec
agent.sources.r2.command = tail -f /home/bxp/Documents/install/tengine-2.2.0/log/nginx/access.log
agent.sources.r2.shell = /bin/bash -c

# defined channel
agent.channels.c2.type = memory
# channel capacity (maximum number of events held in the channel)
agent.channels.c2.capacity = 1000
# number of events the sink takes from the channel per transaction
agent.channels.c2.transactionCapacity = 100

# defined sinks
agent.sinks.k2.type = hdfs
agent.sinks.k2.hdfs.path = hdfs://hadoop-series.bxp.com:8020/user/bxp/flume/tracker/%Y/%m/%d
agent.sinks.k2.hdfs.fileType = DataStream
agent.sinks.k2.hdfs.writeFormat = Text
agent.sinks.k2.hdfs.batchSize = 10
agent.sinks.k2.hdfs.useLocalTimeStamp = true

# bind the sources and sink to the channel
agent.sources.r2.channels = c2
agent.sinks.k2.channel = c2
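
Because hdfs.path contains the time escapes %Y/%m/%d and useLocalTimeStamp is set to true, events are bucketed by the local date at write time. For example, events collected on 2017-05-01 would land under a path like the following (the file name is illustrative; FlumeData is the sink's default file prefix):

hdfs://hadoop-series.bxp.com:8020/user/bxp/flume/tracker/2017/05/01/FlumeData.1493612345678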

4. Copy the JARs that integrate Flume with HDFS into Flume's lib directory. The required JARs are listed below, followed by a copy sketch.

{HADOOP_HOME}/share/hadoop/common/lib/commons-configuration-1.6.jar
{HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-2.6.0-cdh5.10.0.jar
{HADOOP_HOME}/share/hadoop/common/hadoop-common-2.6.0-cdh5.10.0.jar
{HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-2.6.0-cdh5.10.0.jar
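
A sketch of the copy step, assuming the environment variables HADOOP_HOME and FLUME_HOME point at your Hadoop and Flume installation directories:

cp ${HADOOP_HOME}/share/hadoop/common/lib/commons-configuration-1.6.jar ${FLUME_HOME}/lib/
cp ${HADOOP_HOME}/share/hadoop/common/lib/hadoop-auth-2.6.0-cdh5.10.0.jar ${FLUME_HOME}/lib/
cp ${HADOOP_HOME}/share/hadoop/common/hadoop-common-2.6.0-cdh5.10.0.jar ${FLUME_HOME}/lib/
cp ${HADOOP_HOME}/share/hadoop/hdfs/hadoop-hdfs-2.6.0-cdh5.10.0.jar ${FLUME_HOME}/lib/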

5. Start HDFS.
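
For a typical Hadoop installation this can be done with the HDFS start script, run from ${HADOOP_HOME} (adjust if your cluster is managed differently):

sbin/start-dfs.sh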

6. Start Flume:

bin/flume-ng agent --conf conf/  --name agent --conf-file conf/agent-commad.conf -Dflume.root.logger=DEBUG,console
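
Once requests hit nginx, Flume should begin rolling files into HDFS. You can confirm this with a recursive listing of the configured path (the date subdirectories depend on the current day):

bin/hdfs dfs -ls -R /user/bxp/flume/tracker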