ELK Log System Installation and Deployment

What is ELK?

ELK is short for ElasticSearch, Logstash, and Kibana. ElasticSearch (ES) stores and searches the data, Logstash feeds data into ES, and Kibana displays it.

ELK system architecture

(Architecture diagram: Logstash shippers → Redis broker → Logstash indexer → Elasticsearch → Kibana)

ElasticSearch

Elasticsearch is a distributed, real-time, full-text search engine. All operations go through a RESTful API, and the underlying implementation is based on the Lucene full-text search library. Data is stored and indexed as JSON documents, with no schema required up front. (A minimal curl sketch follows the list below.)

  • Terminology comparison between ElasticSearch and a traditional relational database: an index corresponds to a database, a type to a table, a document to a row, and a field to a column.
  • A node is a running Elasticsearch instance. A cluster is a group of nodes sharing the same cluster.name; they work together, share data, and provide failover and scaling. A single node can also form a cluster by itself.
  • One node in the cluster is elected as the master. It manages cluster-level changes such as creating or deleting indices and adding or removing nodes. The master does not take part in document-level changes or searches, so it does not become a bottleneck as traffic grows.
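
As a minimal illustration of the RESTful, schema-free interface (the index, type, and field names below are made up for the example, and the host assumes a node listening locally on the default port):

    # Index a JSON document without defining any schema beforehand
    curl -XPUT 'http://localhost:9200/myindex/mytype/1' -d '{"user": "alice", "message": "hello elk"}'

    # Full-text search for it
    curl 'http://localhost:9200/myindex/_search?q=message:hello&pretty'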

Logstash

Logstash is a very flexible log collection tool. It is not limited to feeding data into Elasticsearch; you can define many kinds of inputs, outputs, and filter/transform rules.

Transport via Redis

Redis servers are usually used as a NoSQL database; here Logstash only uses Redis as a message queue (the broker between the shippers and the indexer).
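
The queue is simply a Redis list: the shipper pushes events onto it and the indexer pops them off. The key demo:queue below is made up purely to show the mechanics with redis-cli:

    # A list used as a FIFO queue: producers push, consumers pop
    redis-cli RPUSH demo:queue '{"message":"event 1"}' '{"message":"event 2"}'
    redis-cli LPOP demo:queue
    redis-cli LLEN demo:queue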

Kibana

Kibana is a tool for real-time data analysis and display.

ELK installation and configuration

  • JDK installation: JDK 1.8.0 or later is required, otherwise Logstash will fail with an error (a quick check follows the commands below)

    • Install with yum

      yum -y install java-1.8.0-openjdk
      
      vim /etc/profile            # append the two lines below
      JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.91-1.b14.el6.x86_64/jre
      export JAVA_HOME
      
      source /etc/profile
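
    A quick sanity check that the JDK and JAVA_HOME are picked up (the exact version string depends on the package installed):

      # Verify the JDK that Logstash will use
      java -version
      echo $JAVA_HOME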
  • Elasticsearch installation and configuration

    wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
    rpm -ivh elasticsearch-1.7.1.noarch.rpm

    # Start
    /etc/init.d/elasticsearch start

    # Install plugins
    /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
    /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

    # If the plugin install fails with:
    # Failed: SSLException[java.security.ProviderException: java.security.KeyException]; nested: ProviderException[java.security.KeyException]; nested: KeyException;
    # fix it with:
    yum upgrade nss
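
    head and kopf are site plugins served by Elasticsearch itself, so once the node is running they can be opened in a browser (or probed with curl) at their standard plugin paths, assuming Elasticsearch listens on 192.168.1.16:9200 as configured below:

    # Plugin UIs
    curl -I http://192.168.1.16:9200/_plugin/head/
    curl -I http://192.168.1.16:9200/_plugin/kopf/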
    • Configure elasticsearch.yml, and change LOG_DIR and DATA_DIR in /etc/init.d/elasticsearch to match (a quick health check follows the settings)

      cluster.name: elk-local
      node.name: node-1
      path.data: /file2/elasticsearch/data
      path.logs: /file2/elasticsearch/logs
      bootstrap.mlockall: true                 # lock the process memory so it is not swapped out
      network.host: 0.0.0.0
      http.port: 9200
      discovery.zen.ping.unicast.hosts: ["192.168.1.16"]
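
      After changing the settings, restart Elasticsearch and check that the node answers and the cluster is healthy (host and port as configured above):

      /etc/init.d/elasticsearch restart

      # Node info and cluster health
      curl 'http://192.168.1.16:9200/'
      curl 'http://192.168.1.16:9200/_cluster/health?pretty'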
      
  • Logstash installation and configuration

    • Install via rpm; download address below (a quick smoke test follows the commands)

      wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.4-1.noarch.rpm

      # Install
      rpm -ivh logstash-2.3.4-1.noarch.rpm

      # Start
      /etc/init.d/logstash start

      # Put the logstash binary on the PATH
      ln -s /opt/logstash/bin/logstash /usr/bin/logstash
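
      A quick way to confirm the install works before writing a real pipeline is an inline stdin/stdout configuration; type a line and Logstash echoes it back as a structured event (Ctrl-C to exit):

      /opt/logstash/bin/logstash -e 'input { stdin {} } output { stdout { codec => rubydebug } }'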
    • Logstash configuration

      • The core of an ELK deployment is collecting and parsing the logs, which is exactly what Logstash does, so this is where most of the configuration work goes. In particular, the grok patterns inside filter{} have to be written to split out and extract fields according to your own log format and the data you need.
      • Path: files ending in .conf under /etc/logstash/conf.d/

      • A configuration has three parts: input, filter, and output.
        (Note: if %{type} is used as the index name, the type must not contain special characters.)
    • Logstash configuration example

      • Roles: the shipper is 192.168.1.13; the broker, indexer, and search & storage all run on 192.168.1.16
      • Broker: install Redis first and start it
      • Shipper: only collects data and does no processing, so the configuration is simple; any other shipper hosts are configured the same way (a quick check that events reach Redis follows the config)

        input {
            file {
                path => "/web/nginx/logs/www.log"
                type => "nginx-log"
                start_position => "beginning"
            }
        }

        output {
            if [type] == "nginx-log" {
                redis {
                    host => "192.168.1.16"
                    port => "6379"
                    data_type => "list"
                    key => "nginx:log"
                }
            }
        }
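
        With the shipper running and requests hitting nginx, events should appear on (and be drained from) the Redis list; this can be checked on the broker host:

        # Queue length and a sample event (a length of 0 is normal if the indexer is already consuming)
        redis-cli -h 192.168.1.16 LLEN nginx:log
        redis-cli -h 192.168.1.16 LRANGE nginx:log 0 0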
      • Indexer, search & storage: consumes the logs sent by the shipper, parses them into fields, and writes them to ES (a quick check of the resulting index follows the config)

        input {
            redis {
                host => "192.168.1.16"
                port => 6379
                data_type => "list"
                key => "nginx:log"
                type => "nginx-log"
            }
        }

        filter {
            grok {
                match => {
                    "message" => "%{IPORHOST:clientip} - %{NOTSPACE:remote_user} \[%{HTTPDATE:timestamp}\]\ \"(?:%{WORD:method} %{NOTSPACE:request}(?: %{URIPROTO:proto}/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:status} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} (%{WORD:x_forword}|-)- (%{NUMBER:request_time}) -- (%{NUMBER:upstream_response_time}) -- %{IPORHOST:domain} -- (%{WORD:upstream_cache_status}|-)"
                }
            }
        }

        output {
            if [type] == "nginx-log" {
                elasticsearch {
                    hosts => ["192.168.1.16:9200"]
                    index => "nginx-%{+YYYY.MM.dd}"
                }
            }
        }
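
        Once the indexer is running, the daily nginx-* index should show up in Elasticsearch with parsed documents in it:

        # List indices and pull one parsed document back out
        curl 'http://192.168.1.16:9200/_cat/indices?v'
        curl 'http://192.168.1.16:9200/nginx-*/_search?pretty&size=1'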
    • Test the configuration / start

      # Test a configuration file for syntax errors:
      /opt/logstash/bin/logstash -f /etc/logstash/conf.d/xx.conf -t

      # Start:
      service logstash start
  • Kibana installation

    wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
    tar zxvf kibana-4.1.1-linux-x64.tar.gz
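
    Kibana 4 is configured through config/kibana.yml inside the extracted directory. As a rough sketch (4.1-style flat setting names; verify against your version), point it at Elasticsearch and run it once in the foreground to make sure it starts before wiring up the init script below:

    # config/kibana.yml (flat 4.1-style keys; adjust if your version differs)
    #   port: 5601
    #   host: "0.0.0.0"
    #   elasticsearch_url: "http://192.168.1.16:9200"

    # Run in the foreground once to verify startup, then Ctrl-C
    ./kibana-4.1.1-linux-x64/bin/kibana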
    • Init script (/etc/init.d/kibana)

      
      #!/bin/bash
      ### BEGIN INIT INFO
      # Provides:          kibana
      # Default-Start:     2 3 4 5
      # Default-Stop:      0 1 6
      # Short-Description: Runs kibana daemon
      # Description: Runs the kibana daemon as a non-root user
      ### END INIT INFO

      # Process name
      NAME=kibana
      DESC="Kibana4"
      PROG="/etc/init.d/kibana"

      # Location of the Kibana bin directory (adjust this path to your install)
      KIBANA_BIN=/vagrant/elk/kibana-4.1.1-linux-x64/bin

      # PID info
      PID_FOLDER=/var/run/kibana/
      PID_FILE=/var/run/kibana/$NAME.pid
      LOCK_FILE=/var/lock/subsys/$NAME
      PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
      DAEMON=$KIBANA_BIN/$NAME

      # User to run the daemon process as
      DAEMON_USER=root

      # Logging location
      KIBANA_LOG=/var/log/kibana.log

      # Begin script
      RETVAL=0

      if [ `id -u` -ne 0 ]; then
              echo "You need root privileges to run this script"
              exit 1
      fi

      # Function library
      . /etc/init.d/functions

      start() {
              echo -n "Starting $DESC : "
              pid=`pidofproc -p $PID_FILE kibana`
              if [ -n "$pid" ] ; then
                      echo "Already running."
                      exit 0
              else
                      # Start daemon
                      if [ ! -d "$PID_FOLDER" ] ; then
                              mkdir $PID_FOLDER
                      fi
                      daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
                      sleep 2
                      pidofproc node > $PID_FILE
                      RETVAL=$?
                      [[ $? -eq 0 ]] && success || failure
                      echo
                      [ $RETVAL = 0 ] && touch $LOCK_FILE
                      return $RETVAL
              fi
      }

      reload()
      {
              echo "Reload command is not implemented for this service."
              return $RETVAL
      }

      stop() {
              echo -n "Stopping $DESC : "
              killproc -p $PID_FILE $DAEMON
              RETVAL=$?
              echo
              [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
      }

      case "$1" in
        start)
              start
              ;;
        stop)
              stop
              ;;
        status)
              status -p $PID_FILE $DAEMON
              RETVAL=$?
              ;;
        restart)
              stop
              start
              ;;
        reload)
              reload
              ;;
        *)
              # Invalid arguments, print the following message.
              echo "Usage: $0 {start|stop|status|restart}" >&2
              exit 2
              ;;
      esac
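
      To wire the script into the system (paths and service name as defined above):

      chmod +x /etc/init.d/kibana
      chkconfig --add kibana       # register with SysV init (CentOS 6)
      service kibana start
      service kibana status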
    • Add authentication in front of Kibana (implemented with nginx)

      1. yum install -y httpd   # skip this step if httpd is already installed
      2. Locate htpasswd (whereis htpasswd):
         htpasswd: /usr/bin/htpasswd /usr/share/man/man1/htpasswd.1.gz
      3. Generate the password file:
         /usr/bin/htpasswd -c /web/nginx/conf/elk/authdb elk
         New password: enter the password twice when prompted; the credentials are stored in authdb
      4. Add the ELK configuration to nginx at /web/nginx/conf/elk/elk.conf:
      server {
              listen          80;
              server_name     www.elk.com;
              charset         utf8;
      
              location / {
                      proxy_pass http://192.168.1.16:5601$request_uri;   # Kibana 4 listens on 5601 by default; adjust if yours differs
                      proxy_set_header   Host   $host;
                      proxy_set_header   X-Real-IP   $remote_addr;
                      proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
                      auth_basic "Authorized users only";
                      auth_basic_user_file /web/nginx/conf/elk/authdb;
               }
      }
      server {
              listen          80;
              server_name     www.es.com;
              charset         utf8;
      
              location / {
                      proxy_pass http://192.168.1.16:9200$request_uri;
                      proxy_set_header   Host   $host;
                      proxy_set_header   X-Real-IP   $remote_addr;
                      proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
                      auth_basic "Authorized users only";
                      auth_basic_user_file /web/nginx/conf/elk/authdb;
               }
      }
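
      Finally, validate and reload nginx, then check that both virtual hosts ask for the basic-auth credentials created above (www.elk.com and www.es.com must resolve to the nginx host, e.g. via an /etc/hosts entry on the client):

      # Validate the configuration and reload (use the full path to your nginx binary if it is not on PATH)
      nginx -t
      nginx -s reload

      # Both endpoints should now require the elk user's password (-u prompts for it)
      curl -u elk http://www.elk.com/
      curl -u elk 'http://www.es.com/_cat/indices?v'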