Hadoop 2.6.0 MapReduce Example: WordCount (Part 1)
阿新 • Published: 2017-08-16
I. Prepare the Test Data
1. On the local Linux system, prepare two files, file1.txt and file2.txt, under /var/lib/hadoop-hdfs/file/. The file list and their contents are shown in the figure below; a sketch that reproduces them follows.
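The figure with the exact file contents is not reproduced here. The following is a minimal sketch of two files whose combined word counts match the final output in section IV (Hello 4, Hadoop 1, Man 1, Boy 1, Word 1); how the words were actually split between the two files is an assumption:

# Assumed contents; only the combined word counts are known from the job output.
mkdir -p /var/lib/hadoop-hdfs/file
cat > /var/lib/hadoop-hdfs/file/file1.txt <<'EOF'
Hello Word
Hello Hadoop
EOF
cat > /var/lib/hadoop-hdfs/file/file2.txt <<'EOF'
Hello Man
Hello Boy
EOF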
2. In HDFS, create the /input path and upload file1.txt and file2.txt, as shown in the figure below:
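This step can be reproduced with the standard HDFS shell commands (run as a user with write access to HDFS):

# Create the input directory in HDFS and upload both files
hdfs dfs -mkdir -p /input
hdfs dfs -put /var/lib/hadoop-hdfs/file/file1.txt /input
hdfs dfs -put /var/lib/hadoop-hdfs/file/file2.txt /input
# Verify the upload
hdfs dfs -ls /input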
II. Write the Code, Package It into a Jar, and Upload It to Linux
Package the code as TestMapReduce.jar and upload it to the /usr/local path on Linux, as shown in the figure below:
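The packaging step itself is not shown in the article. Here is a minimal sketch, assuming the source file sits under the package directory implied by the class name used in the run command (com/jngreen/mapreduce/test):

# Compile against the Hadoop classpath and package the classes into a jar
mkdir -p classes
javac -classpath "$(hadoop classpath)" -d classes \
    com/jngreen/mapreduce/test/WordCount.java
jar -cvf TestMapReduce.jar -C classes .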
III. Run the Command
Run the following command:

hadoop jar /usr/local/TestMapReduce.jar com.jngreen.mapreduce.test.WordCount /input/file1.txt /input/file2.txt /output/output
A screenshot of the command's execution is shown below:
IV. Check the Results
Check the results under the HDFS output path /output, as shown in the figure below:
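The results can also be read directly from the command line; a MapReduce job writes one part-r-* file per reducer plus a _SUCCESS marker:

# List the job output directory
hdfs dfs -ls /output/output
# Print the aggregated word counts
hdfs dfs -cat /output/output/part-r-00000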
The result is Hello 4, Hadoop 1, Man 1, Boy 1, Word 1, exactly as expected.
V. WordCount Source Code
The full source code is as follows:
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // TokenizerMapper implements the Map phase: it extends Mapper and overrides map()
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      // Split the input line into words with StringTokenizer
      StringTokenizer itr = new StringTokenizer(value.toString());
      // Iterate over the resulting tokens
      while (itr.hasMoreTokens()) {
        // Store the current token in the Text object word
        word.set(itr.nextToken());
        // Emit (word, 1), i.e. (Text, IntWritable), to the context for the Reduce phase
        context.write(word, one);
      }
    }
  }

  // IntSumReducer implements the Reduce phase: it extends Reducer and overrides reduce()
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      // Accumulate every val from the map output values into sum
      for (IntWritable val : values) {
        sum += val.get();
      }
      // Store the total in the IntWritable result
      result.set(sum);
      // Emit (key, result), i.e. (Text, IntWritable), through the context
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    // Load the Hadoop configuration
    Configuration conf = new Configuration();
    // Validate the command-line arguments
    if (args.length < 2) {
      System.err.println("Usage: wordcount <in> [<in>...] <out>");
      System.exit(2);
    }
    // Construct a Job instance named "word count"
    Job job = new Job(conf, "word count");
    // Set the jar by locating the class that contains it
    job.setJarByClass(WordCount.class);
    // Set the Mapper
    job.setMapperClass(TokenizerMapper.class);
    // Set the Combiner
    job.setCombinerClass(IntSumReducer.class);
    // Set the Reducer
    job.setReducerClass(IntSumReducer.class);
    // Set the output key class
    job.setOutputKeyClass(Text.class);
    // Set the output value class
    job.setOutputValueClass(IntWritable.class);
    // Every argument except the last is an input path
    for (int i = 0; i < args.length - 1; ++i) {
      FileInputFormat.addInputPath(job, new Path(args[i]));
    }
    // The last argument is the output path
    FileOutputFormat.setOutputPath(job, new Path(args[args.length - 1]));
    // Wait for the job to complete, then exit
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
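Two notes on the driver code. First, the Job(Configuration, String) constructor used above is deprecated as of Hadoop 2.x; the recommended equivalent is the static factory method:

Job job = Job.getInstance(conf, "word count");

Second, IntSumReducer can safely double as the Combiner only because summing counts is associative and commutative: pre-aggregating the (word, 1) pairs on the map side yields the same final totals while reducing shuffle traffic.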