
Fluentd Log Processing: Pulling Logs with tail (Part 3)


Collecting logs with the built-in tail plugin

The tail plugin works like `tail -f`: it continuously reads new lines as they are appended to the log files.

<source>
    @type     tail
    path      /log-dir/*-app.log
    pos_file  /log-dir/app.log.pos
    tag       idaas
    refresh_interval 10s
    read_from_head true
    path_key path
    <parse>
            # parse each log line directly as JSON
            @type json
    </parse>
</source>
<source>
  @type     tail
  path      /log-dir/*-debug.log
  pos_file  /log-dir/debug.log.pos
  tag       debug
  multiline_flush_interval 2s
  read_from_head true
  path_key path
    <parse>
            # multiline plays the same role as Logstash's multiline codec
            @type   multiline
            format_firstline /^(?<level>(INFO|WARN|ERROR)+)/
            format1 /(?<level>[a-zA-Z]+)\s*\[(?<date>[0-9/\-: ,]+)\] (?<logger>[a-zA-Z\.]+):(?<message>[\d\D\s]+)/
    </parse>
</source>
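To see what the multiline parser extracts, here is a quick check of the `format1` pattern against a made-up debug line (the sample log line and its logger name are hypothetical, not taken from the article):

```python
import re

# format1 from the <parse> block above, translated to Python's (?P<name>...) syntax.
FORMAT1 = re.compile(
    r"(?P<level>[a-zA-Z]+)\s*\[(?P<date>[0-9/\-: ,]+)\] "
    r"(?P<logger>[a-zA-Z\.]+):(?P<message>[\d\D\s]+)"
)

# Hypothetical log line shaped like the ones this parser targets.
line = "ERROR [2021/03/05 10:15:30,123] com.example.TokenService: connection refused"

m = FORMAT1.match(line)
print(m.group("level"))   # ERROR
print(m.group("logger"))  # com.example.TokenService
```

Lines that do not match `format_firstline` (i.e. do not start with INFO/WARN/ERROR) are treated as continuations of the previous event, which is how multi-line stack traces stay in one record.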
<source>
    @type     tail
    path      /log-dir/*-requests.log
    pos_file  /log-dir/request.log.pos
    tag       request
    refresh_interval 10s
    read_from_head true
    path_key path
    <parse>
        @type regexp
        expression /(?<message>.*)/
    </parse>
</source>

The first filter adds two fields to every record: the tag and the hostname of the host machine. Getting the real hostname may require adjusting the Docker setup; reading it directly inside a container only yields the container ID.

<filter *>
    @type record_transformer             
    <record>
        tag ${tag}
        hostname "#{Socket.gethostname}"
    </record>
</filter>
<filter request>
    # drop log lines we do not need, such as health checks and metrics scraping
    @type    grep
    <exclude>
        key message
        pattern /.*healthcheck.*|.*prometheusMetrics.*|.*(v1+\/)+(configurations)+(\/+versions).*/
    </exclude>
</filter>
<filter request>
    @type parser
    key_name message
    reserve_data yes
    <parse>   
        @type regexp
        expression  /(?<ip>[^|]+)\|(?<date>[^|]+)\|(?<statusCode>[^|]+)\|(?<contentLength>[^|]+)\|(?<reqURI>[^|]+)\|(?<referer>[^|]+)\|(?<userAgent>[^|]+)\|(?<reqId>[^|]+)\|(?<internalIp>[^|]+)\|(?<reqHost>[^|]+)\|(?<reqOrigin>[^|]+)\|(?<reqTime>[^|]+) \|.*\|(?<requestMethod>[\w]+)/
    </parse>
</filter>
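The parser filter above splits the pipe-delimited request log into named fields. A quick way to sanity-check the expression is to run it over a sample line (the line below is a hypothetical example matching the expected shape, not real data):

```python
import re

# The same expression as the <parse> block, in Python's named-group syntax.
REQUEST_RE = re.compile(
    r"(?P<ip>[^|]+)\|(?P<date>[^|]+)\|(?P<statusCode>[^|]+)\|"
    r"(?P<contentLength>[^|]+)\|(?P<reqURI>[^|]+)\|(?P<referer>[^|]+)\|"
    r"(?P<userAgent>[^|]+)\|(?P<reqId>[^|]+)\|(?P<internalIp>[^|]+)\|"
    r"(?P<reqHost>[^|]+)\|(?P<reqOrigin>[^|]+)\|(?P<reqTime>[^|]+) \|"
    r".*\|(?P<requestMethod>[\w]+)"
)

# Hypothetical request log line with 14 pipe-separated segments.
line = ("203.0.113.7|2021/03/05 10:15:30|200|512|/v1/users|-|curl/7.64|"
        "req-001|10.0.0.5|api.example.com|-|12 |-|GET")

m = REQUEST_RE.match(line)
print(m.group("ip"))             # 203.0.113.7
print(m.group("requestMethod"))  # GET
```

Because `reserve_data yes` is set, the parsed fields are merged into the record alongside the original `message` instead of replacing it.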
<match idaas>
    # rewrite the tag: records whose thread_name matches /token/ get tag
    # app.token; non-matching records get tag app.idaas
    @type rewrite_tag_filter
    <rule>
        key     thread_name
        pattern /token/
        tag     app.token
    </rule>
    <rule>
         key     thread_name
         pattern /token/
         tag     app.idaas
         invert  true
    </rule>
</match>
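The two rules together behave like a single if/else on `thread_name`; the second rule's `invert true` catches everything the first rule did not. As a sketch:

```python
import re

def route(thread_name: str) -> str:
    """Mirror the rewrite_tag_filter rules above: a thread_name matching
    /token/ is retagged app.token, everything else app.idaas."""
    return "app.token" if re.search(r"token", thread_name) else "app.idaas"

print(route("http-nio-token-exec-1"))  # app.token
print(route("main"))                   # app.idaas
```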

The idaas stream has now been split. Here we run app.token through one more filter, then send it to Elasticsearch together with app.idaas.

<filter app.token>
    @type parser
    key_name thread_name
    reserve_data yes
    <parse>
        @type regexp
        expression /(?<thread_name>[A-Za-z0-9\.\-_=/\? ]+\.)/
    </parse>
</filter>
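Note that the expression in this filter is greedy and ends with a literal dot, so it captures everything up to the last `.` in `thread_name`. A quick check against a hypothetical value:

```python
import re

# Same pattern as the <parse> block above, in Python syntax. The greedy
# character class plus the trailing literal dot means the capture runs to
# the LAST dot in the input.
PATTERN = re.compile(r"(?P<thread_name>[A-Za-z0-9\.\-_=/\? ]+\.)")

# Hypothetical thread_name; real values depend on the application.
m = PATTERN.search("token/v1/check?id=42.trailing")
print(m.group("thread_name"))  # token/v1/check?id=42.
```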
<match request>
    @type elasticsearch
    host elasticsearchlog-lb.elasticsearch-log
    index_name    s3-fluentd-request
    type_name     s3-fluentd-request
    flush_interval 2s
    include_timestamp true
    ssl_verify    false
</match>
<match debug>
    @type elasticsearch
    host elasticsearchlog-lb.elasticsearch-log
    index_name    s3-fluentd-debug
    type_name     s3-fluentd-debug
    flush_interval 2s
    include_timestamp true
    ssl_verify    false
</match>
<match app.*>
    @type elasticsearch
    host elasticsearchlog-lb.elasticsearch-log
    index_name    s3-fluentd-idaas
    type_name     s3-fluentd-idaas
    flush_interval 2s
    include_timestamp true
    ssl_verify    false
</match>
