Flume + Kafka + Spark Streaming + HBase + Visualization (Part 1)
By 阿新 · Published 2018-07-25
一、Prerequisites
Basic Linux commands
Scala or Python (at least one of the two)
Fundamentals of Hadoop, Spark, Flume, Kafka, and HBase
二、Flume, a Distributed Log Collection Framework
Business context: servers and web services generate large volumes of logs. How do we make use of those logs, and how do we move that volume into the cluster? Two common approaches:
1. Batch collection with shell scripts, then upload to HDFS: poor timeliness, low fault tolerance, heavy network/disk IO, and little visibility for monitoring (a sketch of this approach follows this list).
2. Flume (the approach taken here; detailed after the sketch below):
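For concreteness, the shell-script approach in point 1 usually amounts to something like the sketch below. This is a hypothetical illustration only; the directory names and the cron schedule are assumptions, not part of the original setup.

#!/bin/bash
# Hypothetical batch uploader, e.g. run hourly from cron:
#   0 * * * * /opt/scripts/upload_logs.sh
# Data reaches HDFS up to an hour late (poor timeliness), a failed
# put is lost unless retried by hand (low fault tolerance), and each
# run re-copies whole files (heavy network/disk IO, hard to monitor).
LOG_DIR=/opt/datas/flume_data          # assumed local log directory
HDFS_DIR=/logs/$(date +%Y%m%d)         # assumed HDFS target directory

hdfs dfs -mkdir -p "$HDFS_DIR"
for f in "$LOG_DIR"/*.log; do
  hdfs dfs -put -f "$f" "$HDFS_DIR/"
done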
Flume: the key is writing configuration files
1) Configure the agent
2) Configure the Source
3) Configure the Channel
4) Configure the Sink
1-netcat-mem-logger.conf: listens for data on a network port (full contents at the end of this section)
Start it with:
flume-ng agent \
-n a1 \
-c conf -f ./1-netcat-mem-logger.conf \
-Dflume.root.logger=INFO,console
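With the agent running, you can sanity-check the netcat source from a second terminal by sending a few lines to port 44444 (the port bound in 1-netcat-mem-logger.conf, shown at the end of this section). Assuming telnet is available:

telnet localhost 44444
hello flume

Each line you type is printed on the agent console by the logger sink, as something like: Event: { headers:{} body: ... hello flume }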
exec-mem-logger.conf: monitors (tails) a file (full contents at the end of this section)
flume-ng agent \
-n a1 \
-c conf -f ./exec-mem-logger.conf \
-Dflume.root.logger=INFO,console
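To exercise the exec source, append lines to the tailed file (the path comes from exec-mem-logger.conf at the end of this section); tail -F picks up each new line and the logger sink prints it on the console:

echo "test log line $(date)" >> /opt/datas/flume_data/exec_tail.log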
The log-collection flow:
1. Log server: start an agent with an exec source, a memory channel, and an avro sink (pointing at the data server), which forwards the collected log data to the data server.
2. Data server: start an agent with an avro source, a memory channel, and a logger sink (or a kafka sink).
conf1: exec-mem-avro.conf
conf2: avro-mem-logger.conf
(both shown in full below)
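Following the same flume-ng pattern as before, the two agents would be started as sketched below (the agent name a1 matches the confs; the data server's agent goes first so its avro source is already listening when the upstream avro sink connects):

# 1) data server: start the downstream agent first
flume-ng agent \
-n a1 \
-c conf -f ./avro-mem-logger.conf \
-Dflume.root.logger=INFO,console

# 2) log server: then start the upstream agent
flume-ng agent \
-n a1 \
-c conf -f ./exec-mem-avro.conf \
-Dflume.root.logger=INFO,console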
1-netcat-mem-logger.conf:

# example for source=netcat, channel=memory, sink=logger
# Name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# configure for sources
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# configure for channels
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# configure for sinks
a1.sinks.k1.type = logger

# bind the source and sink to the channel
a1.sinks.k1.channel = c1
a1.sources.r1.channels = c1
exec-mem-logger.conf:

# Name the components on this agent
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# configure for sources
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/datas/flume_data/exec_tail.log

# configure for channels
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# configure for sinks
a1.sinks.k1.type = logger

# bind the source and sink to the channel
a1.sinks.k1.channel = c1
a1.sources.r1.channels = c1
conf1: exec-mem-avro.conf

# Name the components on this agent
a1.sources = exec-source
a1.channels = memory-channel
a1.sinks = avro-sink

# configure for sources
a1.sources.exec-source.type = exec
a1.sources.exec-source.command = tail -F /opt/datas/log-collect-system/log_server.log

# configure for channels
a1.channels.memory-channel.type = memory
a1.channels.memory-channel.capacity = 1000
a1.channels.memory-channel.transactionCapacity = 100

# configure for sinks
a1.sinks.avro-sink.type = avro
a1.sinks.avro-sink.hostname = localhost
a1.sinks.avro-sink.port = 44444

# bind the source and sink to the channel
a1.sinks.avro-sink.channel = memory-channel
a1.sources.exec-source.channels = memory-channel
conf2: avro-mem-logger.conf

# Name the components on this agent
a1.sources = avro-source
a1.channels = memory-channel
a1.sinks = logger-sink

# configure for sources
a1.sources.avro-source.type = avro
a1.sources.avro-source.bind = localhost
a1.sources.avro-source.port = 44444

# configure for channels
a1.channels.memory-channel.type = memory
a1.channels.memory-channel.capacity = 1000
a1.channels.memory-channel.transactionCapacity = 100

# configure for sinks
a1.sinks.logger-sink.type = logger

# bind the source and sink to the channel
a1.sinks.logger-sink.channel = memory-channel
a1.sources.avro-source.channels = memory-channel

Binding the source and sink to the channel (the last two lines of each conf) is essential; forgetting it is a common mistake. Startup order also matters: start avro-mem-logger.conf first, then exec-mem-avro.conf, so the avro source is already listening before the avro sink tries to connect.
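At this point an end-to-end test is simply: append a line to /opt/datas/log-collect-system/log_server.log on the log server and watch it printed by the data server's agent console. Step 2 of the flow above also mentions a kafka sink as the alternative to the logger sink. As a hedged sketch of that swap, using Flume's built-in Kafka sink (the broker address localhost:9092 and the topic name log_topic are assumptions for illustration), the sink section of conf2 would become:

# configure for sinks -- Kafka variant of conf2 (sketch)
a1.sinks = kafka-sink
a1.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.kafka-sink.kafka.bootstrap.servers = localhost:9092   # assumed broker address
a1.sinks.kafka-sink.kafka.topic = log_topic                    # assumed topic name
a1.sinks.kafka-sink.kafka.flumeBatchSize = 100
a1.sinks.kafka-sink.channel = memory-channel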