
ELK real-time log analysis platform: a detailed deployment walkthrough, with hands-on advice and how to avoid the pitfalls

/*
 * Copyright 2002-2012 the original author or authors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.springframework.jdbc.datasource.lookup;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;

import org.springframework.beans.factory.InitializingBean;
import org.springframework.jdbc.datasource.AbstractDataSource;
import org.springframework.util.Assert;

/**
 * Abstract {@link javax.sql.DataSource} implementation that routes {@link #getConnection()}
 * calls to one of various target DataSources based on a lookup key. The latter is usually
 * (but not necessarily) determined through some thread-bound transaction context.
 *
 * @author Juergen Hoeller
 * @since 2.0.1
 * @see #setTargetDataSources
 * @see #setDefaultTargetDataSource
 * @see #determineCurrentLookupKey()
 */
public abstract class AbstractRoutingDataSource extends AbstractDataSource implements InitializingBean {

	private Map<Object, Object> targetDataSources;

	private Object defaultTargetDataSource;

	private boolean lenientFallback = true;

	private DataSourceLookup dataSourceLookup = new JndiDataSourceLookup();

	private Map<Object, DataSource> resolvedDataSources;

	private DataSource resolvedDefaultDataSource;


	/**
	 * Specify the map of target DataSources, with the lookup key as key.
	 * The mapped value can either be a corresponding {@link javax.sql.DataSource}
	 * instance or a data source name String (to be resolved via a
	 * {@link #setDataSourceLookup DataSourceLookup}).
	 * <p>The key can be of arbitrary type; this class implements the
	 * generic lookup process only. The concrete key representation will
	 * be handled by {@link #resolveSpecifiedLookupKey(Object)} and
	 * {@link #determineCurrentLookupKey()}.
	 */
	public void setTargetDataSources(Map<Object, Object> targetDataSources) {
		this.targetDataSources = targetDataSources;
	}

	/**
	 * Specify the default target DataSource, if any.
	 * <p>The mapped value can either be a corresponding {@link javax.sql.DataSource}
	 * instance or a data source name String (to be resolved via a
	 * {@link #setDataSourceLookup DataSourceLookup}).
	 * <p>This DataSource will be used as target if none of the keyed
	 * {@link #setTargetDataSources targetDataSources} match the
	 * {@link #determineCurrentLookupKey()} current lookup key.
	 */
	public void setDefaultTargetDataSource(Object defaultTargetDataSource) {
		this.defaultTargetDataSource = defaultTargetDataSource;
	}

	/**
	 * Specify whether to apply a lenient fallback to the default DataSource
	 * if no specific DataSource could be found for the current lookup key.
	 * <p>Default is "true", accepting lookup keys without a corresponding entry
	 * in the target DataSource map - simply falling back to the default DataSource
	 * in that case.
	 * <p>Switch this flag to "false" if you would prefer the fallback to only apply
	 * if the lookup key was {@code null}. Lookup keys without a DataSource
	 * entry will then lead to an IllegalStateException.
	 * @see #setTargetDataSources
	 * @see #setDefaultTargetDataSource
	 * @see #determineCurrentLookupKey()
	 */
	public void setLenientFallback(boolean lenientFallback) {
		this.lenientFallback = lenientFallback;
	}

	/**
	 * Set the DataSourceLookup implementation to use for resolving data source
	 * name Strings in the {@link #setTargetDataSources targetDataSources} map.
	 * <p>Default is a {@link JndiDataSourceLookup}, allowing the JNDI names
	 * of application server DataSources to be specified directly.
	 */
	public void setDataSourceLookup(DataSourceLookup dataSourceLookup) {
		this.dataSourceLookup = (dataSourceLookup != null ? dataSourceLookup : new JndiDataSourceLookup());
	}


	@Override
	public void afterPropertiesSet() {
		if (this.targetDataSources == null) {
			throw new IllegalArgumentException("Property 'targetDataSources' is required");
		}
		this.resolvedDataSources = new HashMap<Object, DataSource>(this.targetDataSources.size());
		for (Map.Entry<Object, Object> entry : this.targetDataSources.entrySet()) {
			Object lookupKey = resolveSpecifiedLookupKey(entry.getKey());
			DataSource dataSource = resolveSpecifiedDataSource(entry.getValue());
			this.resolvedDataSources.put(lookupKey, dataSource);
		}
		if (this.defaultTargetDataSource != null) {
			this.resolvedDefaultDataSource = resolveSpecifiedDataSource(this.defaultTargetDataSource);
		}
	}

	/**
	 * Resolve the given lookup key object, as specified in the
	 * {@link #setTargetDataSources targetDataSources} map, into
	 * the actual lookup key to be used for matching with the
	 * {@link #determineCurrentLookupKey() current lookup key}.
	 * <p>The default implementation simply returns the given key as-is.
	 * @param lookupKey the lookup key object as specified by the user
	 * @return the lookup key as needed for matching
	 */
	protected Object resolveSpecifiedLookupKey(Object lookupKey) {
		return lookupKey;
	}

	/**
	 * Resolve the specified data source object into a DataSource instance.
	 * <p>The default implementation handles DataSource instances and data source
	 * names (to be resolved via a {@link #setDataSourceLookup DataSourceLookup}).
	 * @param dataSource the data source value object as specified in the
	 * {@link #setTargetDataSources targetDataSources} map
	 * @return the resolved DataSource (never {@code null})
	 * @throws IllegalArgumentException in case of an unsupported value type
	 */
	protected DataSource resolveSpecifiedDataSource(Object dataSource) throws IllegalArgumentException {
		if (dataSource instanceof DataSource) {
			return (DataSource) dataSource;
		}
		else if (dataSource instanceof String) {
			return this.dataSourceLookup.getDataSource((String) dataSource);
		}
		else {
			throw new IllegalArgumentException(
					"Illegal data source value - only [javax.sql.DataSource] and String supported: " + dataSource);
		}
	}


	@Override
	public Connection getConnection() throws SQLException {
		return determineTargetDataSource().getConnection();
	}

	@Override
	public Connection getConnection(String username, String password) throws SQLException {
		return determineTargetDataSource().getConnection(username, password);
	}

	@Override
	@SuppressWarnings("unchecked")
	public <T> T unwrap(Class<T> iface) throws SQLException {
		if (iface.isInstance(this)) {
			return (T) this;
		}
		return determineTargetDataSource().unwrap(iface);
	}

	@Override
	public boolean isWrapperFor(Class<?> iface) throws SQLException {
		return (iface.isInstance(this) || determineTargetDataSource().isWrapperFor(iface));
	}

	/**
	 * Retrieve the current target DataSource. Determines the
	 * {@link #determineCurrentLookupKey() current lookup key}, performs
	 * a lookup in the {@link #setTargetDataSources targetDataSources} map,
	 * falls back to the specified
	 * {@link #setDefaultTargetDataSource default target DataSource} if necessary.
	 * @see #determineCurrentLookupKey()
	 */
	protected DataSource determineTargetDataSource() {
		Assert.notNull(this.resolvedDataSources, "DataSource router not initialized");
		Object lookupKey = determineCurrentLookupKey();
		DataSource dataSource = this.resolvedDataSources.get(lookupKey);
		if (dataSource == null && (this.lenientFallback || lookupKey == null)) {
			dataSource = this.resolvedDefaultDataSource;
		}
		if (dataSource == null) {
			throw new IllegalStateException("Cannot determine target DataSource for lookup key [" + lookupKey + "]");
		}
		return dataSource;
	}

	/**
	 * Determine the current lookup key. This will typically be
	 * implemented to check a thread-bound transaction context.
	 * <p>Allows for arbitrary keys. The returned key needs
	 * to match the stored lookup key type, as resolved by the
	 * {@link #resolveSpecifiedLookupKey} method.
	 */
	protected abstract Object determineCurrentLookupKey();

}

package com.saas.framework.util;

import java.util.HashMap;
import java.util.Map;

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.session.Session;
import org.apache.shiro.subject.Subject;
import org.springframework.web.context.ContextLoader;
import org.springframework.web.context.WebApplicationContext;

import com.saas.crm.entity.DatacenterFormMap;
import com.saas.framework.util.datasource.ContextHolder;
import com.saas.framework.util.datasource.JDBCTest;

public class SpringIocUtils {

	private static Map<String, Object> beanFactoryMap = new HashMap<String, Object>();

	private static WebApplicationContext wac;

	/**
	 * Look up a bean by the convention "simple class name with a lower-case
	 * first letter", caching the result. The datasource is switched first so
	 * that any bean resolved here talks to the current tenant's database.
	 */
	@SuppressWarnings("unchecked")
	public static <T> T getBean(Class<T> clazz) {
		switchDatasource();
		wac = ContextLoader.getCurrentWebApplicationContext();
		String simpleName = clazz.getSimpleName();
		String beanName = simpleName.substring(0, 1).toLowerCase() + simpleName.substring(1);
		if (beanFactoryMap.containsKey(beanName)) {
			return (T) beanFactoryMap.get(beanName);
		}
		T t = (T) wac.getBean(beanName);
		beanFactoryMap.put(beanName, t);
		return t;
	}

	/**
	 * Pick the routing key for the AbstractRoutingDataSource from the current
	 * Shiro session: "admin" maps to the default datasource, every other user
	 * maps to a per-tenant key built from the cellphone and the database name.
	 */
	public static void switchDatasource() {
		Subject subject = SecurityUtils.getSubject();
		if (subject == null) {
			return;
		}
		Session session = subject.getSession();
		DatacenterFormMap datacenterFormMap = (DatacenterFormMap) session.getAttribute(session.getId());
		JDBCTest jdbcTest = (JDBCTest) session.getAttribute("jdbcTest");
		if (datacenterFormMap == null) {
			ContextHolder.setCustomerType(ContextHolder.DATA_SOURCE_A);
			return;
		}
		String cellphone = (String) datacenterFormMap.get("cellphone");
		if (cellphone == null) {
			return;
		}
		if (cellphone.equals("admin")) {
			ContextHolder.setCustomerType(ContextHolder.DATA_SOURCE_A);
		} else {
			// The database name is the last path segment of the JDBC URL.
			String dbName = jdbcTest.getUrl();
			dbName = dbName.substring(dbName.lastIndexOf("/") + 1);
			ContextHolder.setCustomerType(cellphone + dbName);
		}
	}
}
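The utility above hands the routing key to a ContextHolder, and AbstractRoutingDataSource picks it up again through determineCurrentLookupKey(), but neither ContextHolder nor the concrete routing subclass is shown in the post. A minimal sketch of both, with names inferred from the calls above (the actual value of DATA_SOURCE_A is an assumption), could look like this:

package com.saas.framework.util.datasource;

/**
 * Thread-bound holder for the current datasource lookup key (sketch only;
 * the real class is not shown in the post, names inferred from usage).
 */
public class ContextHolder {

	public static final String DATA_SOURCE_A = "dataSourceA"; // actual key value assumed

	private static final ThreadLocal<String> HOLDER = new ThreadLocal<String>();

	public static void setCustomerType(String customerType) {
		HOLDER.set(customerType);
	}

	public static String getCustomerType() {
		return HOLDER.get();
	}

	public static void clear() {
		HOLDER.remove();
	}
}

A routing DataSource then only has to return that key (separate file):

package com.saas.framework.util.datasource;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

/** Routes every getConnection() call to the DataSource registered under the thread-bound key. */
public class DynamicDataSource extends AbstractRoutingDataSource {

	@Override
	protected Object determineCurrentLookupKey() {
		return ContextHolder.getCustomerType();
	}
}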


1. Introduction to the ELK platform

Logs mainly fall into system logs, application logs and security logs. Operations staff and developers can use them to learn about a server's hardware and software state and to track down configuration errors and their causes. Analyzing logs regularly also tells you about server load, performance and security, so that problems can be corrected in time.

Usually the logs are scattered across different machines. If you manage dozens or hundreds of servers and still read logs the traditional way, logging into each machine in turn, it is tedious and inefficient. The pressing need is centralized log management, for example with the open-source syslog, collecting the logs from all servers into one place.

Once logs are centralized, searching and aggregating them becomes the next headache. Linux commands such as grep, awk and wc cover basic search and counting, but for more demanding querying, sorting and statistics, and across a large fleet of machines, they quickly fall short.

The open-source real-time log analysis platform ELK solves all of the above. ELK is a combination of three open-source tools: ElasticSearch, Logstash and Kibana. Official site: https://www.elastic.co


2. Installation prerequisites

ELK platform environment:

Component                   Version
-----------------------     --------------------------
Server operating system     CentOS release 6.7 (Final)
ElasticSearch               2.3.4
Logstash                    2.3.4
Kibana                      4.5.3
JDK                         1.8

Note: Logstash runs on a Java runtime, and Logstash 1.5 and later require at least Java 1.7, so the newest Java is recommended. Only the runtime is needed, so a plain JRE would do, but I install the full JDK here anyway (version 1.8, matching the table above; install it yourself first if you have not).


3. Download

Download links for all three packages can be found on the official downloads page: https://www.elastic.co/downloads

4. Installation and debugging

4.1 Install the JDK

Install the JDK as root:

mkdir -p /usr/lib/jvm

tar -xvf jdk-8u45-linux-x64.tar.gz -C /usr/lib/jvm

# vim /etc/profile  -- add the following system-wide settings
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_45
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_45/bin/java 300

sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.8.0_45/bin/javac 300
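The /etc/profile changes only take effect in a new login shell; to apply and verify them in the current shell:

source /etc/profile
java -version    # should report java version "1.8.0_45"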

4.2 Install Elasticsearch

Install Elasticsearch under a dedicated elk account (Elasticsearch 2.x refuses to start as root, so a normal user is mandatory):

Unpack the tarball:

useradd elk

su - elk

tar -xvf elasticsearch-2.3.4.tar.gz

cd elasticsearch-2.3.4

Install the Head plugin:

./bin/plugin install mobz/elasticsearch-head

ls plugins/

# If ls shows a head directory, the plugin is installed.

[elk@…_test_dbm1_121_62 elasticsearch-2.3.4]$ ll plugins/
total 4
drwxrwxr-x. 5 elk elk 4096 Aug  2 17:26 head
[elk@…_test_dbm1_121_62 elasticsearch-2.3.4]$

Edit the ES configuration file (config/elasticsearch.yml):

cluster.name: es_cluster
node.name: node0
path.data: /home/elk/data
path.logs: /home/elk/logs
# IP address of the current host
network.host: 192.168.121.62
http.port: 9200

Start ES in the background:

./bin/elasticsearch &

The first pitfall can show up right here:


Looking at the background log, you can see that the node talks to other nodes on transport port 9300 and accepts HTTP requests on port 9200:

[elk@…_test_dbm1_121_62 elasticsearch-2.3.4]$ more ../logs/es_cluster.log
[2016-08-02 17:47:23,285][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-08-02 17:47:23,579][INFO ][node                     ] [node0] version[2.3.4], pid[21176], build[e455fd0/2016-06-30T11:24:31Z]
[2016-08-02 17:47:23,586][INFO ][node                     ] [node0] initializing ...
[2016-08-02 17:47:24,213][INFO ][plugins                  ] [node0] modules [reindex, lang-expression, lang-groovy], plugins [head], sites [head]
[2016-08-02 17:47:24,235][INFO ][env                      ] [node0] using [1] data paths, mounts [[/home (/dev/mapper/vg_dbmlslave1-lv_home)]], net usable_space [542.1gb], net total_space [1017.2gb], spins? [possibly], types [ext4]
[2016-08-02 17:47:24,235][INFO ][env                      ] [node0] heap size [989.8mb], compressed ordinary object pointers [true]
[2016-08-02 17:47:24,235][WARN ][env                      ] [node0] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-08-02 17:47:25,828][INFO ][node                     ] [node0] initialized
[2016-08-02 17:47:25,828][INFO ][node                     ] [node0] starting ...
[2016-08-02 17:47:25,939][INFO ][transport                ] [node0] publish_address {192.168.121.62:9300}, bound_addresses {192.168.121.62:9300}
[2016-08-02 17:47:25,944][INFO ][discovery                ] [node0] es_cluster/626_Pu5sQzy96m7P0EaU4g
[2016-08-02 17:47:29,028][INFO ][cluster.service          ] [node0] new_master {node0}{626_Pu5sQzy96m7P0EaU4g}{192.168.121.62}{192.168.121.62:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-08-02 17:47:29,116][INFO ][http                     ] [node0] publish_address {192.168.121.62:9200}, bound_addresses {192.168.121.62:9200}
[2016-08-02 17:47:29,117][INFO ][node                     ] [node0] started
[2016-08-02 17:47:29,149][INFO ][gateway                  ] [node0] recovered [0] indices into cluster_state
[elk@…_test_dbm1_121_62 elasticsearch-2.3.4]$
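The WARN line about max file descriptors is exactly this kind of pitfall: with only 4096 descriptors ES will run into trouble under load. Raising the limit for the elk user, using the value the log itself suggests, avoids it; for example, as root in /etc/security/limits.conf (then log in again as elk):

elk soft nofile 65536
elk hard nofile 65536

The "returned result" discussed next is the JSON that ES serves over HTTP; you can fetch it from the root endpoint (the response shown is a sketch of what a 2.3.4 node returns; exact values will differ):

curl http://192.168.121.62:9200/
{
  "name" : "node0",
  "cluster_name" : "es_cluster",
  "version" : {
    "number" : "2.3.4",
    ...
  },
  "tagline" : "You Know, for Search"
}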

 

In the returned result you can see the configured cluster_name and node name, plus the version of the installed software. The head plugin installed earlier is a browser front end for the ES cluster: it shows cluster state and the documents stored in the cluster, and lets you run searches and plain REST requests. It is served at http://192.168.121.62:9200/_plugin/head/

As that interface shows, the ES cluster currently holds no index and no type, so everything is empty.

4.3 Install Logstash

Logstash itself is just a collector; you have to tell it about an Input and an Output (and there may be several of each). Since we want to ship the Log4j output of Java code into ElasticSearch, the Input here is log4j and the Output is elasticsearch.

Install and configure:

Unpack the tarball:

tar -xvf logstash-2.3.4.tar.gz

cd logstash-2.3.4

Put the configuration file under a config directory:

mkdir config

vim config/log4j_to_es.conf

# For detail structure of this file
# See: https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
input {
  # For detail config for log4j as input,
  # See: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
  log4j {
    mode => "server"
    host => "192.168.121.62"
    port => 4567
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  # For detail config for elasticsearch as output,
  # See: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
  elasticsearch {
    action => "index"                 # The operation on ES
    hosts  => "192.168.121.62:9200"   # ElasticSearch host, can be array.
    index  => "applog"                # The index to write data to.
  }
}
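Logstash 2.x can validate a configuration before you start the pipeline, which saves a restart cycle when there is a typo; a quick sanity check:

./bin/logstash agent -f config/log4j_to_es.conf --configtest
# prints "Configuration OK" when the file parses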

Start Logstash with two arguments, the agent command and the configuration file:

[elk@…_test_dbm1_121_62 logstash-2.3.4]$ ./bin/logstash agent -f config/log4j_to_es.conf

Settings: Default pipeline workers: 32

log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Pipeline main started

Now Logstash can collect logs and store them in ES; a short piece of Java code will exercise the whole chain.

4.4 The elk3 test project

The project was created in Eclipse. Its layout is roughly: a Java class Application.java, a log configuration file log4j.properties, and a Maven build file pom.xml:

(1) Application.java:

package com.demo.elk;

import org.apache.log4j.Logger;

public class Application {

    private static final Logger LOGGER = Logger.getLogger(Application.class);

    public static void main(String[] args) {
        // Emit ten log events, one every half second, for Logstash to pick up.
        for (int i = 0; i < 10; i++) {
            LOGGER.error("Info log [" + i + "].");
            try {
                Thread.sleep(500);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

(2) pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>elk3</groupId>
  <artifactId>elk3</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <name>elk3</name>
  <dependencies>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version> <!-- the original post is cut off here; 1.2.17 assumed -->
    </dependency>
  </dependencies>
</project>
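The post breaks off before showing the log4j.properties it lists above. A minimal sketch that matches the Logstash log4j input configured earlier (host and port taken from log4j_to_es.conf; everything else is an assumption) would be:

# Root logger: everything at INFO and above goes to the console and to Logstash
log4j.rootLogger=INFO, console, logstash

# Local console output
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

# SocketAppender ships serialized LoggingEvents to the Logstash log4j input
log4j.appender.logstash=org.apache.log4j.net.SocketAppender
log4j.appender.logstash.RemoteHost=192.168.121.62
log4j.appender.logstash.Port=4567
log4j.appender.logstash.ReconnectionDelay=60000

After running Application, the indexed events can be checked straight from the shell (index name from the Logstash output section above):

curl 'http://192.168.121.62:9200/applog/_search?pretty'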
