Testing a configuration file for writing web log stream files to HDFS
阿新 · Published 2018-12-11
# Agent components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Source: watch a local directory for new files
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/log
a1.sources.r1.fileHeader = true

# Sink: write events to HDFS, bucketed by date
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events/%Y-%m-%d/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = second
# Roll a new file every 2 s, at 10 KB, or after 256 events
a1.sinks.k1.hdfs.rollInterval = 2
a1.sinks.k1.hdfs.rollSize = 10240
a1.sinks.k1.hdfs.rollCount = 256
a1.sinks.k1.hdfs.batchSize = 50
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream

# Channel: in-memory buffer between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Wire source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
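The agent is launched with the standard flume-ng CLI. A minimal sketch, assuming the config above is saved as spooldir-hdfs.conf (a file name chosen here for illustration) and $FLUME_HOME points at the Flume install:

# spooldir-hdfs.conf is an illustrative name; --name must match the agent name (a1)
$FLUME_HOME/bin/flume-ng agent \
  --conf $FLUME_HOME/conf \
  --conf-file spooldir-hdfs.conf \
  --name a1 \
  -Dflume.root.logger=INFO,console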
This config monitors a directory. Take a look at what lands on HDFS after the write; it is very fast.
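To inspect the output, list the date bucket that hdfs.path resolves to. A small sketch, assuming events arrived today (useLocalTimeStamp = true buckets by the agent's local clock):

# %Y-%m-%d in hdfs.path matches today's date here
hdfs dfs -ls /flume/events/$(date +%Y-%m-%d)/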
One thing to note: I am monitoring the whole directory, and once Flume finishes ingesting a file there, it renames that file by appending the suffix .COMPLETED. With this suffix added, the file will not be picked up again on the next pass. When I first tested this I did not notice that, saw nothing being written to HDFS, and sat there like a fool waiting for ages.
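For reference, this behavior is configurable on the spooling directory source: fileSuffix controls the marker appended to consumed files (default .COMPLETED), and deletePolicy can delete them instead of renaming. A sketch of those documented options:

# Default: rename consumed files with this suffix
a1.sources.r1.fileSuffix = .COMPLETED
# Alternative: delete consumed files ("never" or "immediate")
a1.sources.r1.deletePolicy = never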
Later I revised the configuration file again. Without the change, the first config generates far too many small files (it rolls a new HDFS file every 2 seconds, at 10 KB, or after 256 events, whichever comes first), and running MapReduce jobs over that many small files is painfully slow.
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/log
a1.sources.r1.fileHeader = true

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events/%Y-%m-%d/
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = second
# Roll less aggressively: every 3 s or at 1 MB; rollCount = 0 disables rolling by event count
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 1048576
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.batchSize = 50
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.fileType = DataStream

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
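To check that the new roll settings actually cut down the number of files, the file count for a date bucket can be compared before and after the change. A sketch using the standard HDFS shell (the second column of the output is the file count):

hdfs dfs -count /flume/events/$(date +%Y-%m-%d)/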