
Using Flume to pull data from Kafka topics and store it in HBase and Elasticsearch

Continuing from the previous post, this one processes the collected data: the tier2 agent below reads each Kafka topic, tags or parses the events, and writes audit data to HBase and runtime logs to Elasticsearch.

#HBASE

tier2.sources  = HbaseAuditSource HbaseRunSource HdfsAuditSources HdfsRunSources HiveAuditSources HiveRunSources StormWorkerSources StormRunSources YarnAuditSources YarnRunSources
tier2.channels = HbaseAuditChannel HbaseRunChannel HdfsAuditChannel HdfsRunChannel HiveAuditChannel HiveRunChannel StormWorkerChannel StormRunChannel YarnAuditChannel YarnRunChannel
tier2.sinks    = HbaseAuditSink HbaseRunSink HdfsAuditSink HdfsRunSink HiveAuditSink HiveRunSink StormWorkerSink StormRunSink YarnAuditSink YarnRunSink


tier2.sources.HbaseAuditSource.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.HbaseAuditSource.channels = HbaseAuditChannel
tier2.sources.HbaseAuditSource.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.HbaseAuditSource.topic = AUDIT_HBASE_WC
tier2.sources.HbaseAuditSource.groupId = flume
tier2.sources.HbaseAuditSource.batchSize=1
tier2.sources.HbaseAuditSource.kafka.consumer.timeout.ms = 100


tier2.sources.HbaseRunSource.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.HbaseRunSource.channels = HbaseRunChannel
tier2.sources.HbaseRunSource.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.HbaseRunSource.topic = RUN_HBASE_WC
tier2.sources.HbaseRunSource.groupId = flume
tier2.sources.HbaseRunSource.batchSize=1
tier2.sources.HbaseRunSource.kafka.consumer.timeout.ms = 100
tier2.sources.HbaseRunSource.interceptors = i1
tier2.sources.HbaseRunSource.interceptors.i1.type=regex_extractor
tier2.sources.HbaseRunSource.interceptors.i1.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:,]{23}).*
tier2.sources.HbaseRunSource.interceptors.i1.serializers = s1 s2 s3
tier2.sources.HbaseRunSource.interceptors.i1.serializers.s1.name= serverip
tier2.sources.HbaseRunSource.interceptors.i1.serializers.s2.name= datatype
tier2.sources.HbaseRunSource.interceptors.i1.serializers.s3.name= time
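
The regex_extractor interceptor attached to each RUN_* source adds serverip, datatype and time headers to every event before it reaches the channel. The short Python sketch below (not part of the Flume configuration) shows what the pattern pulls out of a log line; the sample line and host address are made up for illustration, since the real format comes from the producer set up in the previous post.

import re

# The interceptor pattern as Flume sees it after properties-file unescaping
# (the doubled backslashes in the config collapse to single ones).
pattern = re.compile(r"serverip=(.*?),datatype=(.*?),([\d\-\s:,]{23}).*")

# Hypothetical event body, shaped to match the pattern.
sample = ("serverip=10.0.0.11,datatype=hbase_run,"
          "2018-09-10 12:34:56,789 INFO [regionserver] region opened")

m = pattern.match(sample)
if m:
    headers = {"serverip": m.group(1), "datatype": m.group(2), "time": m.group(3)}
    print(headers)
    # {'serverip': '10.0.0.11', 'datatype': 'hbase_run', 'time': '2018-09-10 12:34:56,789'}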






tier2.sources.HdfsAuditSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.HdfsAuditSources.channels = HdfsAuditChannel
tier2.sources.HdfsAuditSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.HdfsAuditSources.topic = AUDIT_HDFS_WC
tier2.sources.HdfsAuditSources.groupId = flume
tier2.sources.HdfsAuditSources.batchSize=1
tier2.sources.HdfsAuditSources.kafka.consumer.timeout.ms = 100


tier2.sources.HdfsRunSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.HdfsRunSources.channels = HdfsRunChannel
tier2.sources.HdfsRunSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.HdfsRunSources.topic = RUN_HDFS_WC
tier2.sources.HdfsRunSources.groupId = flume
tier2.sources.HdfsRunSources.batchSize=1
tier2.sources.HdfsRunSources.kafka.consumer.timeout.ms = 100
tier2.sources.HdfsRunSources.interceptors = i1
tier2.sources.HdfsRunSources.interceptors.i1.type=regex_extractor
tier2.sources.HdfsRunSources.interceptors.i1.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:,]{23}).*
tier2.sources.HdfsRunSources.interceptors.i1.serializers = s1 s2 s3
tier2.sources.HdfsRunSources.interceptors.i1.serializers.s1.name= serverip
tier2.sources.HdfsRunSources.interceptors.i1.serializers.s2.name= datatype
tier2.sources.HdfsRunSources.interceptors.i1.serializers.s3.name= time






tier2.sources.HiveAuditSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.HiveAuditSources.channels = HiveAuditChannel
tier2.sources.HiveAuditSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.HiveAuditSources.topic = AUDIT_HIVE_WC
tier2.sources.HiveAuditSources.groupId = flume
tier2.sources.HiveAuditSources.batchSize=1
tier2.sources.HiveAuditSources.kafka.consumer.timeout.ms = 100


tier2.sources.HiveRunSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.HiveRunSources.channels = HiveRunChannel
tier2.sources.HiveRunSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.HiveRunSources.topic = RUN_HIVE_WC
tier2.sources.HiveRunSources.groupId = flume
tier2.sources.HiveRunSources.batchSize=1
tier2.sources.HiveRunSources.kafka.consumer.timeout.ms = 100
tier2.sources.HiveRunSources.interceptors = i1
tier2.sources.HiveRunSources.interceptors.i1.type=regex_extractor
tier2.sources.HiveRunSources.interceptors.i1.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:,]{23}).*
tier2.sources.HiveRunSources.interceptors.i1.serializers = s1 s2 s3
tier2.sources.HiveRunSources.interceptors.i1.serializers.s1.name= serverip
tier2.sources.HiveRunSources.interceptors.i1.serializers.s2.name= datatype
tier2.sources.HiveRunSources.interceptors.i1.serializers.s3.name= time






tier2.sources.StormWorkerSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.StormWorkerSources.channels = StormWorkerChannel
tier2.sources.StormWorkerSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.StormWorkerSources.topic = AUDIT_STORM_WC
tier2.sources.StormWorkerSources.groupId = flume
tier2.sources.StormWorkerSources.batchSize=1
tier2.sources.StormWorkerSources.kafka.consumer.timeout.ms = 100


tier2.sources.StormRunSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.StormRunSources.channels = StormRunChannel
tier2.sources.StormRunSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.StormRunSources.topic = RUN_STORM_WC
tier2.sources.StormRunSources.groupId = flume
tier2.sources.StormRunSources.batchSize=1
tier2.sources.StormRunSources.kafka.consumer.timeout.ms = 100
tier2.sources.StormRunSources.interceptors = i1
tier2.sources.StormRunSources.interceptors.i1.type=regex_extractor
tier2.sources.StormRunSources.interceptors.i1.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:.]{23}).*
tier2.sources.StormRunSources.interceptors.i1.serializers = s1 s2 s3
tier2.sources.StormRunSources.interceptors.i1.serializers.s1.name= serverip
tier2.sources.StormRunSources.interceptors.i1.serializers.s2.name= datatype
tier2.sources.StormRunSources.interceptors.i1.serializers.s3.name= time




#YARN
tier2.sources.YarnAuditSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.YarnAuditSources.channels = YarnAuditChannel
tier2.sources.YarnAuditSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.YarnAuditSources.topic = AUDIT_YARN_WC
tier2.sources.YarnAuditSources.groupId = flume
tier2.sources.YarnAuditSources.batchSize=1
tier2.sources.YarnAuditSources.kafka.consumer.timeout.ms = 100


tier2.sources.YarnRunSources.type = org.apache.flume.source.kafka.KafkaSource
tier2.sources.YarnRunSources.channels = YarnRunChannel
tier2.sources.YarnRunSources.zookeeperConnect = *.*.*.*:2181/kafka-test
tier2.sources.YarnRunSources.topic = RUN_YARN_WC
tier2.sources.YarnRunSources.groupId = flume
tier2.sources.YarnRunSources.batchSize=1
tier2.sources.YarnRunSources.kafka.consumer.timeout.ms = 100
tier2.sources.YarnRunSources.interceptors = i1
tier2.sources.YarnRunSources.interceptors.i1.type=regex_extractor
tier2.sources.YarnRunSources.interceptors.i1.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:,]{23}).*
tier2.sources.YarnRunSources.interceptors.i1.serializers = s1 s2 s3
tier2.sources.YarnRunSources.interceptors.i1.serializers.s1.name= serverip
tier2.sources.YarnRunSources.interceptors.i1.serializers.s2.name= datatype
tier2.sources.YarnRunSources.interceptors.i1.serializers.s3.name= time






tier2.sinks.HbaseAuditSink.type = hbase
tier2.sinks.HbaseAuditSink.table = audit_hbase_wc
tier2.sinks.HbaseAuditSink.columnFamily = f1
tier2.sinks.HbaseAuditSink.batchSize=1
tier2.sinks.HbaseAuditSink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
tier2.sinks.HbaseAuditSink.serializer.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:,]{23}).*/(.*?);\\srequest:(.*?);.*user=(.*?),\\sscope=(.*?),.*
tier2.sinks.HbaseAuditSink.serializer.colNames = serverip,datatype,requestdate,clientip,operation,requestuser,accessdatafile
tier2.sinks.HbaseAuditSink.channel = HbaseAuditChannel
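
The hbase-type sinks in this agent only write into existing tables: audit_hbase_wc here, plus the other audit_*_wc tables below, each with the single column family f1. They are usually created once in the hbase shell before the agent starts; purely as an illustration, the Python sketch below does the same thing through the happybase client (my choice, not something the original post uses), against a placeholder Thrift server address.

import happybase

# Placeholder host: point this at a running HBase Thrift server.
connection = happybase.Connection("hbase-thrift-host")

# Tables the HBase sinks in this agent write to, each with the single
# column family f1 referenced in the sink configuration.
for table_name in ("audit_hbase_wc", "audit_hdfs_wc", "audit_hive_wc",
                   "audit_storm_wc", "audit_yarn_wc"):
    if table_name.encode() not in connection.tables():
        connection.create_table(table_name, {"f1": dict()})

connection.close()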


tier2.sinks.HbaseRunSink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
tier2.sinks.HbaseRunSink.hostNames = *.*.*.*:9300
tier2.sinks.HbaseRunSink.indexName = run_hbase_wc
tier2.sinks.HbaseRunSink.clusterName = fe8734cb-8e5d-476a-9aa6-19ee459e15a6
tier2.sinks.HbaseRunSink.batchSize = 1
tier2.sinks.HbaseRunSink.channel = HbaseRunChannel
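
HbaseRunSink ships the raw RUN_HBASE_WC events to Elasticsearch over the transport port (9300). By default the sink rolls the index daily, so the concrete index names look like run_hbase_wc-yyyy-MM-dd. To confirm documents are arriving, you can query the REST port (9200); a minimal sketch with the requests library and a placeholder host:

import requests

# Placeholder host: the ES REST endpoint (9200), not the transport
# port (9300) that the Flume sink itself uses.
resp = requests.get("http://es-host:9200/run_hbase_wc*/_search",
                    params={"size": 5})
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    print(hit["_index"], hit["_source"])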




tier2.sinks.HdfsAuditSink.type = hbase
tier2.sinks.HdfsAuditSink.table = audit_hdfs_wc
tier2.sinks.HdfsAuditSink.columnFamily = f1
tier2.sinks.HdfsAuditSink.batchSize=1
tier2.sinks.HdfsAuditSink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
tier2.sinks.HdfsAuditSink.serializer.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:,]{23})\\s*.*:\\sallowed=(.*?)ugi=(.*?)\\s.*?\\)ip=/(.*?)cmd=(.*?)src=(.*?)dst=(.*?)perm=(.*?)proto=(.*)
tier2.sinks.HdfsAuditSink.serializer.colNames = serverip,datatype,requestdate,operationresult,requestuser,clientip,operation,src,dst,dataowner,proto
tier2.sinks.HdfsAuditSink.channel = HdfsAuditChannel
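
RegexHbaseEventSerializer writes one capture group per entry in colNames into column family f1, under a generated row key, and needs the number of groups and column names to match. The Python sketch below shows how the groups line up with the columns; the sample audit line is hypothetical and shaped to fit the configured pattern (the real layout depends on how the previous post formatted the HDFS audit log). Note that with this pattern the field separators end up inside the captured values.

import re

# The HdfsAuditSink serializer pattern after properties unescaping.
pattern = re.compile(
    r"serverip=(.*?),datatype=(.*?),([\d\-\s:,]{23})\s*.*:\sallowed=(.*?)"
    r"ugi=(.*?)\s.*?\)ip=/(.*?)cmd=(.*?)src=(.*?)dst=(.*?)perm=(.*?)proto=(.*)"
)

# Column names from the sink configuration, in capture-group order.
cols = ("serverip,datatype,requestdate,operationresult,requestuser,"
        "clientip,operation,src,dst,dataowner,proto").split(",")

# Hypothetical line shaped to fit the pattern above.
sample = ("serverip=10.0.0.11,datatype=hdfs_audit,2018-09-10 12:34:56,789 "
          "INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE)ip=/10.0.0.12 "
          "cmd=open src=/tmp/demo.txt dst=null perm=null proto=rpc")

m = pattern.match(sample)
if m:
    for name, value in zip(cols, m.groups()):
        print(name, "=", repr(value))
    # e.g. operationresult = 'true ' -- the trailing separator is captured too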




tier2.sinks.HdfsRunSink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
tier2.sinks.HdfsRunSink.hostNames = *.*.*.*:9300
tier2.sinks.HdfsRunSink.indexName = run_hdfs_wc
#tier2.sinks.HdfsRunSink.indexType = message
tier2.sinks.HdfsRunSink.clusterName = fe8734cb-8e5d-476a-9aa6-19ee459e15a6
tier2.sinks.HdfsRunSink.batchSize = 1
#tier2.sinks.HdfsRunSink.ttl = 5d
tier2.sinks.HdfsRunSink.channel = HdfsRunChannel






tier2.sinks.HiveAuditSink.type = hbase
tier2.sinks.HiveAuditSink.table = audit_hive_wc
tier2.sinks.HiveAuditSink.columnFamily = f1
tier2.sinks.HiveAuditSink.batchSize=1
tier2.sinks.HiveAuditSink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
tier2.sinks.HiveAuditSink.serializer.regex = serverip=(.*?),datatype=(.*?),.*"serviceName":(.*?),\\s"username":(.*?),\\s"impersonator":(.*?),\\s"ipAddress":(.*?),\\s"operation":(.*?),\\s"eventTime":(.*?),\\s"operationText":(.*?),\\s"allowed":(.*?),\\s"databaseName":(.*?),\\s"tableName":(.*?),\\s"resourcePath":(.*?),\\s"objectType":(.*)
tier2.sinks.HiveAuditSink.serializer.colNames = serverip,datatype,requestuser,username,impersonator,clientip,operation,eventtime,operationtext,operationresult,databaseName,tableName,resourcePath,objectType
tier2.sinks.HiveAuditSink.channel = HiveAuditChannel


tier2.sinks.HiveRunSink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
tier2.sinks.HiveRunSink.hostNames = *.*.*.*:9300
tier2.sinks.HiveRunSink.indexName = run_hive_wc
tier2.sinks.HiveRunSink.clusterName = fe8734cb-8e5d-476a-9aa6-19ee459e15a6
tier2.sinks.HiveRunSink.batchSize = 1
tier2.sinks.HiveRunSink.channel = HiveRunChannel




tier2.sinks.StormWorkerSink.type = hbase
tier2.sinks.StormWorkerSink.table = audit_storm_wc
tier2.sinks.StormWorkerSink.columnFamily = f1
tier2.sinks.StormWorkerSink.batchSize=1
tier2.sinks.StormWorkerSink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
tier2.sinks.StormWorkerSink.serializer.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:.]{23})\\s.*,\\s.*,\\sattempt=(.*?)\\s.*,\\slast exception:(.*?)\\son.*
tier2.sinks.StormWorkerSink.serializer.colNames = serverip,datatype,requestdate,attempt,lastexception
tier2.sinks.StormWorkerSink.channel = StormWorkerChannel




tier2.sinks.StormRunSink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
tier2.sinks.StormRunSink.hostNames = *.*.*.*:9300
tier2.sinks.StormRunSink.indexName = run_storm_wc
tier2.sinks.StormRunSink.clusterName = fe8734cb-8e5d-476a-9aa6-19ee459e15a6
tier2.sinks.StormRunSink.batchSize = 1
tier2.sinks.StormRunSink.channel = StormRunChannel






tier2.sinks.YarnAuditSink.type = hbase
tier2.sinks.YarnAuditSink.table = audit_yarn_wc
tier2.sinks.YarnAuditSink.columnFamily = f1
tier2.sinks.YarnAuditSink.batchSize=1
tier2.sinks.YarnAuditSink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
tier2.sinks.YarnAuditSink.serializer.regex = serverip=(.*?),datatype=(.*?),([\\d\\-\\s:,]{23})\\s*.*:\\sUSER=(.*?)IP=(.*?)OPERATION=(.*?)TARGET=(.*?)RESULT=(.*?)APPID=(.*)
tier2.sinks.YarnAuditSink.serializer.colNames = serverip,datatype,requestdate,requestuser,clientip,operation,target,operationresult,APPID
tier2.sinks.YarnAuditSink.channel = YarnAuditChannel




tier2.sinks.YarnRunSink.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
tier2.sinks.YarnRunSink.hostNames = *.*.*.*:9300
tier2.sinks.YarnRunSink.indexName = run_yarn_wc
#tier2.sinks.YarnRunSink.indexType = message
tier2.sinks.YarnRunSink.clusterName = fe8734cb-8e5d-476a-9aa6-19ee459e15a6
tier2.sinks.YarnRunSink.batchSize = 1
#tier2.sinks.YarnRunSink.ttl = 5d
tier2.sinks.YarnRunSink.channel = YarnRunChannel
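
With the agent running, a quick way to confirm that audit events are landing in HBase is to scan a few rows of one of the target tables. Another small happybase sketch, again with a placeholder Thrift host:

import happybase

# Placeholder host: the HBase Thrift server.
connection = happybase.Connection("hbase-thrift-host")
table = connection.table("audit_hdfs_wc")

# Print a handful of rows; column qualifiers follow the colNames
# configured on the sink, e.g. b'f1:requestuser'.
for row_key, data in table.scan(limit=5):
    print(row_key, data)

connection.close()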








tier2.channels.HbaseAuditChannel.type = memory
tier2.channels.HbaseAuditChannel.capacity = 10000
tier2.channels.HbaseAuditChannel.transactionCapacity=1000
tier2.channels.HbaseAuditChannel.byteCapacityBufferPercentage=20


tier2.channels.HbaseRunChannel.type = memory
tier2.channels.HbaseRunChannel.capacity = 10000
tier2.channels.HbaseRunChannel.transactionCapacity=1000
tier2.channels.HbaseRunChannel.byteCapacityBufferPercentage=20




tier2.channels.HdfsAuditChannel.type = memory
tier2.channels.HdfsAuditChannel.capacity = 10000
tier2.channels.HdfsAuditChannel.transactionCapacity=1000
tier2.channels.HdfsAuditChannel.byteCapacityBufferPercentage=20


tier2.channels.HdfsRunChannel.type = memory
tier2.channels.HdfsRunChannel.capacity = 10000
tier2.channels.HdfsRunChannel.transactionCapacity=1000
tier2.channels.HdfsRunChannel.byteCapacityBufferPercentage=20




tier2.channels.HiveAuditChannel.type = memory
tier2.channels.HiveAuditChannel.capacity = 10000
tier2.channels.HiveAuditChannel.transactionCapacity=1000
tier2.channels.HiveAuditChannel.byteCapacityBufferPercentage=20


tier2.channels.HiveRunChannel.type = memory
tier2.channels.HiveRunChannel.capacity = 10000
tier2.channels.HiveRunChannel.transactionCapacity=1000
tier2.channels.HiveRunChannel.byteCapacityBufferPercentage=20




tier2.channels.StormWorkerChannel.type = memory
tier2.channels.StormWorkerChannel.capacity = 10000
tier2.channels.StormWorkerChannel.transactionCapacity=1000
tier2.channels.StormWorkerChannel.byteCapacityBufferPercentage=20


tier2.channels.StormRunChannel.type = memory
tier2.channels.StormRunChannel.capacity = 10000
tier2.channels.StormRunChannel.transactionCapacity=1000
tier2.channels.StormRunChannel.byteCapacityBufferPercentage=20




tier2.channels.YarnAuditChannel.type = memory
tier2.channels.YarnAuditChannel.capacity = 10000
tier2.channels.YarnAuditChannel.transactionCapacity=1000
tier2.channels.YarnAuditChannel.byteCapacityBufferPercentage=20


tier2.channels.YarnRunChannel.type = memory
tier2.channels.YarnRunChannel.capacity = 10000
tier2.channels.YarnRunChannel.transactionCapacity=1000
tier2.channels.YarnRunChannel.byteCapacityBufferPercentage=20
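
Once the agent is started, one way to smoke-test the whole chain end to end is to push a single, correctly formatted message into one of the topics and then check the matching Elasticsearch index (for the RUN_* topics) or HBase table (for the AUDIT_* topics) as shown above. A minimal sketch using the kafka-python client; the broker address is a placeholder and the message body reuses the same hypothetical format as the earlier examples.

from kafka import KafkaProducer

# Placeholder broker list for the cluster behind *.*.*.*:2181/kafka-test.
producer = KafkaProducer(bootstrap_servers="kafka-broker:9092")

message = ("serverip=10.0.0.11,datatype=hdfs_run,"
           "2018-09-10 12:34:56,789 INFO namenode.NameNode: test event")

# Send one event to the RUN_HDFS_WC topic consumed by HdfsRunSources.
producer.send("RUN_HDFS_WC", message.encode("utf-8"))
producer.flush()
producer.close()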
