
Detailed Steps for Configuring a Hadoop Development Environment with Eclipse on Windows 10, plus a WordCount Example

Note: the Hadoop cluster has already been set up; the cluster runs Hadoop 2.5.0.

Goal: configure a Hadoop development environment with Eclipse on Windows 10 and write MapReduce programs that connect to the Hadoop cluster.

Prerequisites: JDK environment variables configured, Eclipse, hadoop-2.7.5.tar, hadoop-eclipse-plugin-2.7.3.jar, and the hadoop-common-2.7.3-bin-master archive (the Hadoop 2.7.3 distribution itself is hard to find now, so the plugin used here is the 2.7.3 version; download matching versions yourself if you want everything to be consistent).

Part 1: Environment Setup

Step 1: Configure the JDK environment variables and install Eclipse (omitted here).

Step 2: Configure the Hadoop environment.

Extract the downloaded Hadoop distribution to a local directory; this article uses Hadoop 2.7.5. Add a system environment variable named HADOOP_HOME whose value is the extraction path, e.g. D:\hadoop-2.7.5, and append %HADOOP_HOME%\bin to Path.
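
If Eclipse does not pick up the new variable (for example because it was started before the variable was set), a commonly used workaround is to set hadoop.home.dir programmatically at the top of your driver's main method. A minimal sketch, with the path as an example:

// Point Hadoop at the local installation from code.
// The path is an example; use your own extraction directory.
System.setProperty("hadoop.home.dir", "D:\\hadoop-2.7.5");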

Step 3: Copy hadoop-eclipse-plugin-2.7.3.jar into the plugins directory under the Eclipse installation directory and restart Eclipse. Open Eclipse -> Preferences; a new Hadoop Map/Reduce entry now appears on the left.


Click the new Hadoop Map/Reduce entry and set the Hadoop extraction path.


Step 4: Extract the hadoop-common-2.7.3-bin-master archive and copy everything in its bin directory (hadoop.dll, hadoop.exp, hadoop.lib, winutils.exe, etc.) into the bin directory of Hadoop 2.7.5. Then also copy hadoop.dll into C:\Windows\System32.

Step 5: In Eclipse, click Window -> Open Perspective -> Map/Reduce; a DFS Locations node appears in the project structure.


Step 6: In Eclipse, click Window -> Show View -> Other -> MapReduce Tools -> Map/Reduce Locations, then click Open.

A Map/Reduce Locations view now appears in the console area at the bottom. Right-click the empty area of the Map/Reduce Locations view and select New Hadoop location to define the connection to the Hadoop cluster. Location name can be anything; Host is the IP address of the Hadoop master; Port is the corresponding port and must match the settings in core-site.xml on the cluster, otherwise you will not be able to connect; User name can be anything.


The location of core-site.xml depends on your own installation; mine is under /etc/hadoop/2.5.0.0-1245/0/, and it can be viewed with: cat /etc/hadoop/2.5.0.0-1245/0/core-site.xml.
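
The entry to look for is fs.defaultFS (fs.default.name on older releases); the Host and Port above must match it. A typical entry looks like the following sketch, where the address is only an example (it matches the cluster that appears in the job log later in this article):

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.200.240:8020</value>
</property>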


After filling in these parameters, click Finish. The connection you defined now appears under DFS Locations; expanding the node shows the files on the cluster. If nothing shows up, the connection failed; re-check the IP address and port configured in the previous step.
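
If you want to rule out plugin problems, the same connection parameters can also be checked from a plain Java program. A minimal sketch, where the class name is made up and the address and user are examples taken from the job log later in this article (substitute your own values):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectionCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect with the same address as fs.defaultFS in core-site.xml and the same
        // user as in the Map/Reduce location; both values here are examples.
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.200.240:8020"), conf, "tws");
        // List the HDFS root; if paths are printed, the connection parameters are correct.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}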


If there are files, click one under a node and confirm that you can view its contents.


If the file contents cannot be displayed at this point and the error message is something like "editor could not be initialized. org.eclipse.ui.workbench.texteditor", the likely cause is that the hadoop.dll under C:\Windows\System32 and the hadoop.dll under hadoop-2.7.5/bin are different versions.

At this point, the Hadoop development environment for Eclipse on Windows is complete.

Part 2: WordCount Example

Step 1: Create a project: File -> New -> Other -> Map/Reduce Project.

Step 2: Create a package under src and a WordCount.java class in that package.

The code is as follows (it can be pasted directly into your WordCount class):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;


public class WordCount {

    // Mapper: split each input line into tokens and emit (word, 1) for every token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also used as the combiner below): sum the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    // Driver: configure and submit the job; expects an HDFS input path and an output path.
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Create a log4j.properties file under src, otherwise an error is reported when the program runs. You can first create a .txt file, add the content, rename the file with the correct name and extension, and then copy it into the project. The content is as follows:
# Configure logging for testing: optionally with a log file
#log4j.rootLogger=debug,appender
log4j.rootLogger=info,appender
#log4j.rootLogger=error,appender
# Log to the console
log4j.appender.appender=org.apache.log4j.ConsoleAppender
# Use TTCCLayout as the layout
log4j.appender.appender.layout=org.apache.log4j.TTCCLayout
Step 3: Right-click the project and choose Run As -> Run Configurations... -> Java Application. With Java Application selected, click New launch configuration at the top left and fill in the Main tab: enter a Name (anything), click Search..., scroll down, select WordCount, and confirm.


Configure the Arguments tab. WordCount expects two program arguments: an input path and an output path on HDFS; note that the output directory must not already exist.
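
For reference, with the cluster address and user name that appear in the job log below, the two arguments could look like the following (the exact paths are examples; the first is the input file, the second the output directory):

hdfs://192.168.200.240:8020/user/tws/word.txt hdfs://192.168.200.240:8020/user/tws/test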


After configuring, click Apply, then Run. If log output similar to the following appears, the job succeeded.

[pool-6-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 115 bytes
[pool-6-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merged 1 segments, 119 bytes to disk to satisfy reduce memory limit
[pool-6-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 1 files, 123 bytes from disk
[pool-6-thread-1] INFO org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl - Merging 0 segments, 0 bytes from memory into reduce
[pool-6-thread-1] INFO org.apache.hadoop.mapred.Merger - Merging 1 sorted segments
[pool-6-thread-1] INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 1 segments left of total size: 115 bytes
[pool-6-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
[pool-6-thread-1] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
[pool-6-thread-1] INFO org.apache.hadoop.mapred.Task - Task:attempt_local1399589841_0001_r_000000_0 is done. And is in the process of committing
[pool-6-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - 1 / 1 copied.
[pool-6-thread-1] INFO org.apache.hadoop.mapred.Task - Task attempt_local1399589841_0001_r_000000_0 is allowed to commit now
[pool-6-thread-1] INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task 'attempt_local1399589841_0001_r_000000_0' to hdfs://192.168.200.240:8020/user/tws/test/_temporary/0/task_local1399589841_0001_r_000000
[pool-6-thread-1] INFO org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
[pool-6-thread-1] INFO org.apache.hadoop.mapred.Task - Task 'attempt_local1399589841_0001_r_000000_0' done.
[pool-6-thread-1] INFO org.apache.hadoop.mapred.Task - Final Counters for attempt_local1399589841_0001_r_000000_0: Counters: 29
	File System Counters
		FILE: Number of bytes read=453
		FILE: Number of bytes written=292149
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=87
		HDFS: Number of bytes written=81
		HDFS: Number of read operations=8
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Map-Reduce Framework
		Combine input records=0
		Combine output records=0
		Reduce input groups=9
		Reduce shuffle bytes=123
		Reduce input records=9
		Reduce output records=9
		Spilled Records=9
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=0
		Total committed heap usage (bytes)=253231104
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Output Format Counters 
		Bytes Written=81
You can now view the results under the path given in the Arguments (i.e. the output path, hdfs://IP:port/path), either from the DFS Locations view or, for example, with hadoop fs -cat on the part-r-00000 file.
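
The output can also be read programmatically. A minimal sketch, where the class name is made up and the address, user, and output path are examples matching the log above:

import java.io.InputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PrintWordCountResult {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Address, user, and output path are examples; use your own values.
        FileSystem fs = FileSystem.get(URI.create("hdfs://192.168.200.240:8020"), conf, "tws");
        // The reducer typically writes its output to part-r-00000 under the output directory.
        InputStream in = fs.open(new Path("/user/tws/test/part-r-00000"));
        try {
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
            fs.close();
        }
    }
}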
The contents of word.txt were as follows:


The result of the run: