
MapReduce in Action: WordCount

Open Eclipse and create a new Java project named WordCount. Write a WordMapper class that extends the Mapper base class and overrides the map function, and a WordReducer class that extends Reducer and overrides the reduce function. Finally, write a driver class that wires WordMapper and WordReducer into a job.

WordMapper class

[screenshot of the WordMapper source]

WordReducer class

[screenshot of the WordReducer source]

WordMain class

[screenshots of the WordMain source]
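Since the class listings above survive only as screenshots, the logic they implement can be sketched in plain Java. This is a minimal sketch, not the actual code from the post: the real WordMapper and WordReducer extend Hadoop's Mapper and Reducer and work with Text/IntWritable key-value pairs, while this version collapses the map and reduce steps into one local method with no Hadoop dependencies.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.StringTokenizer;

// Plain-Java sketch of the WordCount logic (no Hadoop dependencies).
public class WordCountSketch {

    // Mirrors WordMapper.map + WordReducer.reduce: tokenize each line,
    // emit (word, 1) pairs, then sum the 1s for each distinct word.
    public static Map<String, Integer> countWords(List<String> lines) {
        Map<String, Integer> counts = new HashMap<>();
        for (String line : lines) {
            // StringTokenizer splits on whitespace, as in the classic example
            StringTokenizer itr = new StringTokenizer(line);
            while (itr.hasMoreTokens()) {
                // merge() performs the per-word summation (the "reduce" step)
                counts.merge(itr.nextToken(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords(java.util.Arrays.asList("hello world", "hello hadoop")));
    }
}
```

In the real job, the pairs emitted by map are shuffled and grouped by key before reduce runs; here the HashMap plays that grouping role locally.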

Next, export the project as a JAR file:

[screenshots of the Eclipse JAR export wizard]

A wordcount.jar file will then appear on the Linux desktop.


Prepare two test files, file1.txt and file2.txt, each containing some words.


Start the Hadoop cluster with start-all.sh, then create the input directory on HDFS: hadoop fs -mkdir /user/gznc/input. Next, upload the local file1.txt and file2.txt into that input directory. There are two ways to do this. The first is the command line: hdfs dfs -put /home/gznc/file1.txt /user/gznc/input, where the first argument is the local path and the second is the HDFS path; file2.txt is uploaded the same way. The second is to write an upload method in Eclipse. Note that your paths may differ from mine.
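The post does not show the Eclipse upload method, but a sketch of it using Hadoop's FileSystem API might look like the following. This is an assumption, not the author's code; the NameNode address hdfs://master:9000 is taken from the error message later in this post, and the Hadoop client jars must be on the classpath.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical upload helper: copies the two local test files into HDFS.
public class Upload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the cluster's NameNode.
        FileSystem fs = FileSystem.get(URI.create("hdfs://master:9000"), conf);
        // Copy each local file into the HDFS input directory.
        fs.copyFromLocalFile(new Path("/home/gznc/file1.txt"), new Path("/user/gznc/input"));
        fs.copyFromLocalFile(new Path("/home/gznc/file2.txt"), new Path("/user/gznc/input"));
        fs.close();
    }
}
```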


Verify that file1.txt and file2.txt have been uploaded to the cluster: hadoop fs -ls /user/gznc/input


The last step is to run the job: hadoop jar /home/gznc/Desktop/wordcount.jar wordcount.WordMain /user/gznc/input /user/gznc/output. The format is: hadoop jar + the path to the JAR (mine is on the desktop) + the fully qualified name of the class containing main (package.class) + the HDFS input path + the output path.

Output of a successful run:
16/10/19 16:30:11 INFO input.FileInputFormat: Total input paths to process : 2
16/10/19 16:30:11 INFO mapreduce.JobSubmitter: number of splits:2
16/10/19 16:30:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1476858901736_0002
16/10/19 16:30:13 INFO impl.YarnClientImpl: Submitted application application_1476858901736_0002
16/10/19 16:30:13 INFO mapreduce.Job: The url to track the job: http://master:18088/proxy/application_1476858901736_0002/
16/10/19 16:30:13 INFO mapreduce.Job: Running job: job_1476858901736_0002
16/10/19 16:30:30 INFO mapreduce.Job: Job job_1476858901736_0002 running in uber mode : false
16/10/19 16:30:30 INFO mapreduce.Job: map 0% reduce 0%
16/10/19 16:30:53 INFO mapreduce.Job: map 100% reduce 0%
16/10/19 16:31:15 INFO mapreduce.Job: map 100% reduce 100%
16/10/19 16:31:16 INFO mapreduce.Job: Job job_1476858901736_0002 completed successfully
16/10/19 16:31:16 INFO mapreduce.Job: Counters: 49
File System Counters
	FILE: Number of bytes read=126
	FILE: Number of bytes written=290714
	FILE: Number of read operations=0
	FILE: Number of large read operations=0
	FILE: Number of write operations=0
	HDFS: Number of bytes read=304
	HDFS: Number of bytes written=40
	HDFS: Number of read operations=9
	HDFS: Number of large read operations=0
	HDFS: Number of write operations=2
Job Counters
	Launched map tasks=2
	Launched reduce tasks=1
	Data-local map tasks=2
	Total time spent by all maps in occupied slots (ms)=43849
	Total time spent by all reduces in occupied slots (ms)=18909
	Total time spent by all map tasks (ms)=43849
	Total time spent by all reduce tasks (ms)=18909
	Total vcore-seconds taken by all map tasks=43849
	Total vcore-seconds taken by all reduce tasks=18909
	Total megabyte-seconds taken by all map tasks=44901376
	Total megabyte-seconds taken by all reduce tasks=19362816
Map-Reduce Framework
	Map input records=4
	Map output records=16
	Map output bytes=150
	Map output materialized bytes=132
	Input split bytes=218
	Combine input records=16
	Combine output records=10
	Reduce input groups=5
	Reduce shuffle bytes=132
	Reduce input records=10
	Reduce output records=5
	Spilled Records=20
	Shuffled Maps =2
	Failed Shuffles=0
	Merged Map outputs=2
	GC time elapsed (ms)=480
	CPU time spent (ms)=3950
	Physical memory (bytes) snapshot=510222336
	Virtual memory (bytes) snapshot=2516795392
	Total committed heap usage (bytes)=256647168
Shuffle Errors
	BAD_ID=0
	CONNECTION=0
	IO_ERROR=0
	WRONG_LENGTH=0
	WRONG_MAP=0
	WRONG_REDUCE=0
File Input Format Counters
	Bytes Read=86
File Output Format Counters
	Bytes Written=40

If the output path (e.g. /user/gznc/output) already exists on the cluster, the job fails with the error below, so make sure each run writes to a path that does not yet exist:

Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://master:9000/user/gznc/output already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
at wordcount.WordMain.main(WordMain.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
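A common way to avoid this error (not shown in the WordMain screenshots, so this is a sketch of a possible addition, not the author's code) is to delete a pre-existing output directory in the driver before submitting the job:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Fragment for the driver (e.g. WordMain.main), before job.waitForCompletion(true):
Configuration conf = new Configuration();
Path output = new Path("/user/gznc/output");
FileSystem fs = FileSystem.get(conf);
if (fs.exists(output)) {
    fs.delete(output, true); // true = delete recursively
}
```

Deleting output automatically is convenient for tutorials but risky in production, where silently overwriting results is usually not what you want.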

Now check the word-frequency results on the cluster. List the output directory with: hadoop fs -ls /user/gznc/output (the counts themselves are written to a part-r-00000 file inside it, which can be printed with hadoop fs -cat).
