Stream Processing: Integrating Kafka + Flume + Storm
1. Data Flow
log system => flume => kafka => storm
2. Installing Flume
1. Install Flume 1.6.0 on storm01: upload the installation package.
2. Extract it into /export/servers/flume (create the flume directory first):
Command: sudo tar -zxvf apache-flume-1.6.0-bin.tar.gz -C /export/servers/flume/
3. Set up the collection config: create a myconf folder under the conf directory.
4. Then create the configuration file exec.conf inside myconf.
Contents of the configuration file, which mainly tails a log file and forwards each line to Kafka:
agent.sources = s1
agent.channels = c1
agent.sinks = k1
agent.sources.s1.type=exec
agent.sources.s1.command=tail -F /export/data/flume_source/click_log/1.log
agent.sources.s1.channels=c1
agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100
# Kafka sink
agent.sinks.k1.type= org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
agent.sinks.k1.brokerList=kafka01:9092
# Kafka topic
agent.sinks.k1.topic=orderMq
# serializer
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.k1.channel=c1
Create the directory to be monitored:
mkdir -p /export/data/flume_source/click_log
5. Start Flume:
bin/flume-ng agent -n agent -c ./conf -f ./conf/myconf/exec.conf -Dflume.root.logger=INFO,console
6. Test whether data flows correctly from Flume to Kafka.
Consume from the shell with Kafka's console consumer:
bin/kafka-console-consumer.sh --zookeeper zk01:2181 --from-beginning --topic orderMq
To consume from the latest offset instead of from the beginning, omit --from-beginning:
bin/kafka-console-consumer.sh --zookeeper zk01:2181 --topic orderMq
Write a script that simulates production of log data:
vi click_log_out.sh
for((i=0;i<50000;i++));
do echo "message-$i" >> /export/data/flume_source/click_log/1.log;
done
Make it executable:
sudo chmod u+x click_log_out.sh
Run the script: sh click_log_out.sh. On the other side, the console consumer starts receiving the collected data, which shows that the Flume-to-Kafka path works.
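If you also want to push test messages into the orderMq topic directly, bypassing Flume (for example to feed the Storm topology built in the next step), the Kafka 0.8 Java producer API can be used. A minimal sketch: the broker address and StringEncoder serializer are taken from the Flume sink config above, while the class name OrderMqTestProducer and the message count are illustrative.
package com.wx.kafkaandstorm;
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
// Hypothetical test class: writes numbered messages directly to the orderMq topic
public class OrderMqTestProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // same broker address as in the Flume sink config
        props.put("metadata.broker.list", "kafka01:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        for (int i = 0; i < 100; i++) {
            producer.send(new KeyedMessage<String, String>("orderMq", "message-" + i));
        }
        producer.close();
    }
}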
3. From Kafka to Storm
Create a new Maven project and add the dependencies:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.wx</groupId>
<artifactId>stormkafka</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<!--Storm core dependency-->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-core</artifactId>
<version>0.9.5</version>
<!-- uncomment when submitting to a real cluster, since the cluster itself provides storm-core: <scope>provided</scope> -->
</dependency>
<!--storm-kafka provides KafkaSpout, which streams data from Kafka into Storm-->
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>0.9.5</version>
<!-- <exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</exclusion>
</exclusions>-->
</dependency>
<dependency>
<groupId>org.clojure</groupId>
<artifactId>clojure</artifactId>
<version>1.5.1</version>
</dependency>
<!--Kafka client dependency-->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.8.2</artifactId>
<version>0.8.1</version>
<exclusions>
<exclusion>
<artifactId>jmxtools</artifactId>
<groupId>com.sun.jdmk</groupId>
</exclusion>
<exclusion>
<artifactId>jmxri</artifactId>
<groupId>com.sun.jmx</groupId>
</exclusion>
<exclusion>
<artifactId>jms</artifactId>
<groupId>javax.jms</groupId>
</exclusion>
<exclusion>
<groupId>org.apache.zookeeper</groupId>
<artifactId>zookeeper</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</exclusion>
</exclusions>
</dependency>
<!-- <dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>2.7.3</version>
</dependency>-->
</dependencies>
<build>
<plugins>
<plugin>
<!--package the project and all of its dependencies into a single jar-->
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>com.wx.kafkaandstorm.KafkaAndStormTopologyMain</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
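With this setup, mvn package produces stormkafka-1.0-SNAPSHOT-jar-with-dependencies.jar (the standard assembly-plugin naming, given the artifactId and version above); that is the jar to submit when running on the cluster.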
Write a topology:
package com.wx.kafkaandstorm;
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;
public class KafkaAndStormTopologyMain {
    public static void main(String[] args) throws Exception {
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        // SpoutConfig(zkHosts, topic, zkRoot, spoutId): the spout reads the orderMq topic
        // and tracks its consumer offsets in ZooKeeper under /myKafka/kafkaSpout
        topologyBuilder.setSpout("kafkaSpout",
                new KafkaSpout(new SpoutConfig(
                        new ZkHosts("192.168.25.130:2181,192.168.25.131:2181,192.168.25.132:2181"),
                        "orderMq",
                        "/myKafka",
                        "kafkaSpout")), 1);
        topologyBuilder.setBolt("mybolt1", new ParserOrderMqBolt(), 1).shuffleGrouping("kafkaSpout");
        Config config = new Config();
        config.setNumWorkers(1);
        // Submit the topology: cluster mode when a topology name is passed as an argument, otherwise local mode
        if (args.length > 0) {
            StormSubmitter.submitTopology(args[0], config, topologyBuilder.createTopology());
        } else {
            LocalCluster localCluster = new LocalCluster();
            localCluster.submitTopology("storm2kafka", config, topologyBuilder.createTopology());
        }
    }
}
Write a Bolt that receives the data from Kafka:
package com.wx.kafkaandstorm;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;
import java.util.Map;
public class ParserOrderMqBolt extends BaseRichBolt {
    private OutputCollector collector;
    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
        this.collector = outputCollector;
    }
    @Override
    public void execute(Tuple tuple) {
        // with no scheme configured, KafkaSpout emits the raw message payload as byte[]
        System.out.println(new String((byte[]) tuple.getValue(0)));
        // BaseRichBolt does not ack automatically; ack here so tuples are not replayed on timeout
        collector.ack(tuple);
    }
    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        // this bolt only prints, so it declares no output fields
    }
}
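The cast to byte[] above is needed because, with no scheme set, the KafkaSpout emits raw message bytes. If you would rather receive ready-made Strings, storm-kafka lets you attach a scheme to the SpoutConfig. A minimal sketch, reusing the same SpoutConfig values as in the topology above (the wrapper class is only for illustration):
package com.wx.kafkaandstorm;
import backtype.storm.spout.SchemeAsMultiScheme;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;
// Illustrative helper: builds a SpoutConfig whose messages arrive as Strings
public class StringSchemeSpoutConfig {
    public static SpoutConfig build() {
        SpoutConfig spoutConfig = new SpoutConfig(
                new ZkHosts("192.168.25.130:2181,192.168.25.131:2181,192.168.25.132:2181"),
                "orderMq", "/myKafka", "kafkaSpout");
        // StringScheme decodes each message to a UTF-8 String in a field named "str",
        // so the bolt can read it with tuple.getString(0) instead of casting byte[]
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        return spoutConfig;
    }
}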
Start the topology: run KafkaAndStormTopologyMain directly for local mode, or submit the assembled jar with the storm jar command, passing a topology name as the argument, for cluster mode.
4. End-to-End Test
Start the log-producing script. Kafka's console consumer picks up the messages, and the data also flows from Kafka into Storm, where the bolt prints every message it receives.