
Big Data Technology: Compression and Decompression Examples


7.10 Compression/Decompression Examples

7.10.1 Compressing and Decompressing Data Streams

CompressionCodec has two methods that make it easy to compress or decompress data. To compress data being written to an output stream, call createOutputStream(OutputStream out) to obtain a CompressionOutputStream, which writes the data to the underlying stream in compressed form. Conversely, to decompress data read from an input stream, call createInputStream(InputStream in) to obtain a CompressionInputStream, which reads uncompressed data from the underlying stream.
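As a minimal sketch of just these two calls (the file names and the gzip codec are placeholders for illustration; it uses the same imports as the full example below, plus org.apache.hadoop.io.compress.GzipCodec):

Configuration conf = new Configuration();
CompressionCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

// Compress: wrap the raw output stream with the codec
try (InputStream in = new FileInputStream("hello.txt");
     CompressionOutputStream cout = codec.createOutputStream(new FileOutputStream("hello.txt.gz"))) {
    IOUtils.copyBytes(in, cout, 4096, false);
    cout.finish();   // flush any remaining compressed data
}

// Decompress: wrap the raw input stream with the codec
try (InputStream cin = codec.createInputStream(new FileInputStream("hello.txt.gz"));
     OutputStream out = new FileOutputStream("hello.copy.txt")) {
    IOUtils.copyBytes(cin, out, 4096, false);
}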

Let's test the following compression codecs:

DEFLATE: org.apache.hadoop.io.compress.DefaultCodec

gzip: org.apache.hadoop.io.compress.GzipCodec

bzip2: org.apache.hadoop.io.compress.BZip2Codec
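For reference, codec.getDefaultExtension() returns ".deflate" for DefaultCodec, ".gz" for GzipCodec, and ".bz2" for BZip2Codec; the compress() method below relies on this to name its output file.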

package com.xyg.mapreduce.compress;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.util.ReflectionUtils;

public class TestCompress {

    public static void main(String[] args) throws Exception {
        // compress("e:/test.txt", "org.apache.hadoop.io.compress.BZip2Codec");
        decompress("e:/test.txt.bz2");
    }

    /*
     * Compress
     * filename: path of the file to compress
     * method: fully qualified codec class to use (e.g. org.apache.hadoop.io.compress.BZip2Codec)
     */
    public static void compress(String filename, String method) throws ClassNotFoundException, IOException {
        // 1 Open an input stream on the file to compress
        File fileIn = new File(filename);
        InputStream in = new FileInputStream(fileIn);

        // 2 Load the codec class by name
        Class<?> codecClass = Class.forName(method);
        Configuration conf = new Configuration();

        // 3 Instantiate the codec via reflection
        CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);

        // 4 Name the output file with the codec's default extension
        File fileOut = new File(filename + codec.getDefaultExtension());
        OutputStream out = new FileOutputStream(fileOut);
        CompressionOutputStream cout = codec.createOutputStream(out);

        // 5 Copy the streams (5 MB buffer)
        IOUtils.copyBytes(in, cout, 1024 * 1024 * 5, false);

        // 6 Close resources
        in.close();
        cout.close();
        out.close();
    }

    /*
     * Decompress
     * filename: path of the file to decompress
     */
    public static void decompress(String filename) throws FileNotFoundException, IOException {
        Configuration conf = new Configuration();
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);

        // 1 Infer the codec from the file extension
        CompressionCodec codec = factory.getCodec(new Path(filename));

        // 2 Bail out if no matching codec is found
        if (null == codec) {
            System.out.println("Cannot find codec for file " + filename);
            return;
        }

        // 3 Open a decompressing input stream on the compressed file
        InputStream cin = codec.createInputStream(new FileInputStream(filename));

        // 4 Open an output stream for the decompressed file
        File fout = new File(filename + ".decoded");
        OutputStream out = new FileOutputStream(fout);

        // 5 Copy the streams (5 MB buffer)
        IOUtils.copyBytes(cin, out, 1024 * 1024 * 5, false);

        // 6 Close resources
        cin.close();
        out.close();
    }
}
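Running compress("e:/test.txt", "org.apache.hadoop.io.compress.BZip2Codec") produces e:/test.txt.bz2; calling decompress("e:/test.txt.bz2") afterwards writes the restored content to e:/test.txt.bz2.decoded, which should match the original test.txt.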

7.10.2 Compressing Map Output

Even if your MapReduce job's input and output files are uncompressed, you can still compress the intermediate output of the map tasks, because that data is written to local disk and transferred over the network to the reduce nodes; compressing it can yield a significant performance gain. All it takes is setting two properties; the driver code below shows how.

Compression codecs supported out of the box by the Hadoop codebase include BZip2Codec and DefaultCodec.

1) Modify the driver

package com.xyg.mapreduce.compress;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {

        Configuration configuration = new Configuration();

        // Enable compression of map output
        configuration.setBoolean("mapreduce.map.output.compress", true);
        // Set the codec used for map output compression
        configuration.setClass("mapreduce.map.output.compress.codec", BZip2Codec.class, CompressionCodec.class);

        Job job = Job.getInstance(configuration);

        job.setJarByClass(WordCountDriver.class);

        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        boolean result = job.waitForCompletion(true);

        System.exit(result ? 0 : 1);   // exit 0 on success
    }
}
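Assuming the three classes are packaged into a jar (the jar name and HDFS paths below are placeholders), the job is submitted in the usual way:

hadoop jar wordcount.jar com.xyg.mapreduce.compress.WordCountDriver /wordcount/input /wordcount/output

Map output compression does not change the final result files; its effect shows up as less intermediate data written to local disk and shuffled across the network, which you can check in the job's shuffle-related counters.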

2) The Mapper stays unchanged

package com.xyg.mapreduce.compress;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
    
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        
        // Split the input line into words on single spaces
        String line = value.toString();
        String[] words = line.split(" ");
        
        // Emit <word, 1> for every word
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}

3) The Reducer stays unchanged

package com.xyg.mapreduce.compress;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable>{
    
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
        
        // Sum the counts for this word
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        
        // Emit <word, total count>
        context.write(key, new IntWritable(count));
    }
}

7.10.3 Compressing Reduce Output

This builds on the WordCount example.

1) Modify the driver

package com.xyg.mapreduce.compress;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.DefaultCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.io.compress.Lz4Codec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        
        Configuration configuration = new Configuration();
        
        Job job = Job.getInstance(configuration);
        
        job.setJarByClass(WordCountDriver.class);
        
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        
        // Enable compression of the reduce output
        FileOutputFormat.setCompressOutput(job, true);
        
        // Set the codec used for the output
        FileOutputFormat.setOutputCompressorClass(job, BZip2Codec.class);
//        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
//        FileOutputFormat.setOutputCompressorClass(job, DefaultCodec.class);
        
        boolean result = job.waitForCompletion(true);
        
        System.exit(result ? 0 : 1);   // exit 0 on success
    }
}
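With output compression enabled and BZip2Codec selected, the files in the output directory are written as part-r-00000.bz2 (and so on) rather than plain part-r-00000. They can still be viewed with hdfs dfs -text <path>, which detects the extension and decompresses on the fly.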

2) The Mapper and Reducer stay unchanged (see 7.10.2).
