
Hadoop Study Notes (2): Setting Up a Single-Node Cluster

This note describes how to set up a single-node Hadoop installation so that you can quickly run simple operations with Hadoop MapReduce and the Hadoop Distributed File System (HDFS).

Hadoop version: Apache Hadoop 2.5.1

OS: CentOS 6.5, kernel (uname -r): 2.6.32-431.el6.x86_64

Prerequisites

Supported platforms

GNU/Linux is, unsurprisingly, supported as a development and production platform. Windows is also a supported platform, but the steps below are for Linux only.

Required software

Install the following software packages on your Linux system:

1. Java (JDK) must be installed. See the Hadoop Java Versions page for the recommended releases; I installed 1.7.

2. ssh must be installed and sshd must be running, so that the Hadoop scripts that manage the remote Hadoop daemons can be used.

Installing the required software

If your system does not already have this software, you need to install it.

For example, on Ubuntu Linux use the following commands:

  $ sudo apt-get install ssh
  $ sudo apt-get install rsync

CentOS ships with ssh (Secure Shell) even in a minimal install. At first I confused it with the Java "SSH" stack (Spring + Struts + Hibernate); embarrassing!
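
Before going further it is worth confirming the prerequisites are actually on the box. A minimal sketch (the CentOS package names in the comments are the stock ones, verify for your release):

```shell
# Print "<name> ok" or "<name> missing" for each required tool.
check_cmd() {
  # command -v succeeds only if the command is on PATH
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 ok"
  else
    echo "$1 missing"
  fi
}
check_cmd ssh
check_cmd rsync
check_cmd java
# If anything is missing on CentOS, something along the lines of:
#   yum install -y openssh-server openssh-clients rsync
#   service sshd start && chkconfig sshd on
```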

Download

Grab the hadoop-2.5.1.tar.gz release archive from the Apache Hadoop download site.

Preparing to start the Hadoop cluster

Unpack hadoop-2.5.1.tar.gz by running tar xvf hadoop-2.5.1.tar.gz; the files are extracted into the hadoop-2.5.1 directory.

Change into the configuration directory: cd hadoop-2.5.1/etc/hadoop/

Edit the "hadoop-env.sh" file to add the definitions below:

vi hadoop-env.sh

In my experience it is a good habit to back a file up before editing it (cp hadoop-env.sh hadoop-env.sh.bak).

Find the following lines:

# The java implementation to use.
export JAVA_HOME={JAVA_HOME}
Change them to:
# The java implementation to use.
export JAVA_HOME=/usr/java/latest
Then add one more line below:
# Assuming your installation directory is /usr/local/hadoop
export HADOOP_PREFIX=/usr/local/hadoop
Save and quit (press ESC, then type :wq).
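
The same two edits can also be scripted, which is handy when setting up more than one box. A sketch on a throwaway copy of the file (assumes GNU sed; substitute your own JAVA_HOME and HADOOP_PREFIX values):

```shell
# Work on a throwaway stand-in for hadoop-env.sh so the edits are easy to inspect.
mkdir -p /tmp/hadoop-env-demo && cd /tmp/hadoop-env-demo
printf '%s\n' '# The java implementation to use.' \
              'export JAVA_HOME=${JAVA_HOME}' > hadoop-env.sh
cp hadoop-env.sh hadoop-env.sh.bak    # back up first, as above
# Point JAVA_HOME at the real JDK location.
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/latest|' hadoop-env.sh
# Append HADOOP_PREFIX only if it is not already defined.
grep -q '^export HADOOP_PREFIX=' hadoop-env.sh || \
  echo 'export HADOOP_PREFIX=/usr/local/hadoop' >> hadoop-env.sh
cat hadoop-env.sh
```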

Change directory back up (cd ../..) to return to "/opt/hadoop-2.5.1".

Try running the following command:

 ./bin/hadoop

This displays the usage documentation for the hadoop script. The output:

Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
You are now ready to start your Hadoop cluster in one of the three supported modes:
  • Local (standalone) mode
  • Pseudo-distributed mode
  • Fully-distributed mode

Standalone operation

By default, Hadoop is configured to run in non-distributed mode, as a single Java process. This is useful for debugging.
The following example copies the unpacked conf directory to use as input, then finds and displays every match of the given regular expression. Output is written to the given output directory.

  $ mkdir input
  $ cp etc/hadoop/*.xml input
  $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output 'dfs[a-z.]+'
  $ cat output/*
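
What the example job computes can be sketched with plain grep/sort/uniq: extract every match of the same regular expression and count occurrences. A demo on a made-up file, not the real conf directory:

```shell
# Stand-in input: two lines that contain the pattern, one that does not.
mkdir -p /tmp/grep-demo && cd /tmp/grep-demo
printf '%s\n' '<name>dfs.replication</name>' '<name>dfs.replication</name>' \
              '<value>1</value>' > sample.xml
# -o prints only the matched text, -h suppresses filenames, -E enables the
# same extended-regex syntax the Hadoop grep example uses.
grep -ohE 'dfs[a-z.]+' sample.xml | sort | uniq -c | sort -rn
```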

However, running "bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output 'dfs[a-z.]+'"

failed with: Error: Could not find or load main class org.apache.hadoop.util.RunJar

I only found this problem mentioned on Stack Overflow, and no solution there either, so I had to figure it out myself.

Troubleshooting steps:

The "hadoop-env.sh" backup made earlier now comes in handy: restore it.

Running "bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output 'dfs[a-z.]+'" again

now reported:

./bin/hadoop: line 133: /usr/java/jdk1.7.0/bin/java: No such file or directory
./bin/hadoop: line 133: exec: /usr/java/jdk1.7.0/bin/java: cannot execute: No such file or directory
The hint suggests the problem is still with the JDK installation. When installing the JDK I had only run:
rpm -ivh /<directory>/jdk-7-linux-x64.rpm
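
A hedged diagnostic for the error above: the path baked into JAVA_HOME did not match where the JDK RPM actually put java. A small helper to print where a command really lives, with symlinks resolved:

```shell
# Print the fully resolved path of a command, or a message if it is absent.
where_is() {
  p=$(command -v "$1" 2>/dev/null) || { echo "$1 not on PATH"; return 1; }
  readlink -f "$p"
}
where_is java || true   # compare this against the JAVA_HOME set in hadoop-env.sh
# and list what the RPM created (typical Oracle RPM layout, verify locally):
#   ls -ld /usr/java/*
```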

and nothing else. After completing the remaining setup steps, I ran "bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output 'dfs[a-z.]+'" again,

which printed:

14/10/07 03:35:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/07 03:35:58 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/10/07 03:35:58 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/10/07 03:35:59 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
14/10/07 03:35:59 INFO input.FileInputFormat: Total input paths to process : 6
14/10/07 03:35:59 INFO mapreduce.JobSubmitter: number of splits:6
14/10/07 03:36:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1185570365_0001
14/10/07 03:36:00 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/staging/root1185570365/.staging/job_local1185570365_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/10/07 03:36:01 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/staging/root1185570365/.staging/job_local1185570365_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/10/07 03:36:01 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1185570365_0001/job_local1185570365_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/10/07 03:36:01 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local1185570365_0001/job_local1185570365_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/10/07 03:36:01 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/10/07 03:36:01 INFO mapreduce.Job: Running job: job_local1185570365_0001
14/10/07 03:36:01 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/10/07 03:36:01 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/10/07 03:36:02 INFO mapred.LocalJobRunner: Waiting for map tasks
14/10/07 03:36:02 INFO mapred.LocalJobRunner: Starting task: attempt_local1185570365_0001_m_000000_0
14/10/07 03:36:02 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/07 03:36:02 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/hadoop-policy.xml:0+9201
14/10/07 03:36:02 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/07 03:36:02 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/07 03:36:02 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/07 03:36:02 INFO mapred.MapTask: soft limit at 83886080
14/10/07 03:36:02 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/07 03:36:02 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/07 03:36:02 INFO mapred.LocalJobRunner: 
14/10/07 03:36:02 INFO mapred.MapTask: Starting flush of map output
14/10/07 03:36:02 INFO mapred.MapTask: Spilling map output
14/10/07 03:36:02 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
14/10/07 03:36:02 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
14/10/07 03:36:02 INFO mapreduce.Job: Job job_local1185570365_0001 running in uber mode : false
14/10/07 03:36:02 INFO mapred.MapTask: Finished spill 0
14/10/07 03:36:02 INFO mapreduce.Job:  map 0% reduce 0%
14/10/07 03:36:02 INFO mapred.Task: Task:attempt_local1185570365_0001_m_000000_0 is done. And is in the process of committing
14/10/07 03:36:02 INFO mapred.LocalJobRunner: map
14/10/07 03:36:02 INFO mapred.Task: Task 'attempt_local1185570365_0001_m_000000_0' done.
14/10/07 03:36:02 INFO mapred.LocalJobRunner: Finishing task: attempt_local1185570365_0001_m_000000_0
14/10/07 03:36:02 INFO mapred.LocalJobRunner: Starting task: attempt_local1185570365_0001_m_000001_0
14/10/07 03:36:02 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/07 03:36:02 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/capacity-scheduler.xml:0+3589
14/10/07 03:36:02 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/07 03:36:02 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/07 03:36:02 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/07 03:36:02 INFO mapred.MapTask: soft limit at 83886080
14/10/07 03:36:02 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/07 03:36:02 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/07 03:36:02 INFO mapred.LocalJobRunner: 
14/10/07 03:36:02 INFO mapred.MapTask: Starting flush of map output
14/10/07 03:36:02 INFO mapred.Task: Task:attempt_local1185570365_0001_m_000001_0 is done. And is in the process of committing
14/10/07 03:36:02 INFO mapred.LocalJobRunner: map
14/10/07 03:36:02 INFO mapred.Task: Task 'attempt_local1185570365_0001_m_000001_0' done.
14/10/07 03:36:02 INFO mapred.LocalJobRunner: Finishing task: attempt_local1185570365_0001_m_000001_0
14/10/07 03:36:02 INFO mapred.LocalJobRunner: Starting task: attempt_local1185570365_0001_m_000002_0
14/10/07 03:36:02 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/07 03:36:02 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/hdfs-site.xml:0+775
14/10/07 03:36:02 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/07 03:36:03 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/07 03:36:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/07 03:36:03 INFO mapred.MapTask: soft limit at 83886080
14/10/07 03:36:03 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/07 03:36:03 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/07 03:36:03 INFO mapred.LocalJobRunner: 
14/10/07 03:36:03 INFO mapred.MapTask: Starting flush of map output
14/10/07 03:36:03 INFO mapred.Task: Task:attempt_local1185570365_0001_m_000002_0 is done. And is in the process of committing
14/10/07 03:36:03 INFO mapred.LocalJobRunner: map
14/10/07 03:36:03 INFO mapred.Task: Task 'attempt_local1185570365_0001_m_000002_0' done.
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Finishing task: attempt_local1185570365_0001_m_000002_0
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Starting task: attempt_local1185570365_0001_m_000003_0
14/10/07 03:36:03 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/07 03:36:03 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/core-site.xml:0+774
14/10/07 03:36:03 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/07 03:36:03 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/07 03:36:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/07 03:36:03 INFO mapred.MapTask: soft limit at 83886080
14/10/07 03:36:03 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/07 03:36:03 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/07 03:36:03 INFO mapred.LocalJobRunner: 
14/10/07 03:36:03 INFO mapred.MapTask: Starting flush of map output
14/10/07 03:36:03 INFO mapred.Task: Task:attempt_local1185570365_0001_m_000003_0 is done. And is in the process of committing
14/10/07 03:36:03 INFO mapred.LocalJobRunner: map
14/10/07 03:36:03 INFO mapred.Task: Task 'attempt_local1185570365_0001_m_000003_0' done.
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Finishing task: attempt_local1185570365_0001_m_000003_0
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Starting task: attempt_local1185570365_0001_m_000004_0
14/10/07 03:36:03 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/07 03:36:03 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/yarn-site.xml:0+690
14/10/07 03:36:03 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/07 03:36:03 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/07 03:36:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/07 03:36:03 INFO mapred.MapTask: soft limit at 83886080
14/10/07 03:36:03 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/07 03:36:03 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/07 03:36:03 INFO mapred.LocalJobRunner: 
14/10/07 03:36:03 INFO mapred.MapTask: Starting flush of map output
14/10/07 03:36:03 INFO mapred.Task: Task:attempt_local1185570365_0001_m_000004_0 is done. And is in the process of committing
14/10/07 03:36:03 INFO mapred.LocalJobRunner: map
14/10/07 03:36:03 INFO mapred.Task: Task 'attempt_local1185570365_0001_m_000004_0' done.
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Finishing task: attempt_local1185570365_0001_m_000004_0
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Starting task: attempt_local1185570365_0001_m_000005_0
14/10/07 03:36:03 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/07 03:36:03 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/httpfs-site.xml:0+620
14/10/07 03:36:03 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/07 03:36:03 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/07 03:36:03 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/07 03:36:03 INFO mapred.MapTask: soft limit at 83886080
14/10/07 03:36:03 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/07 03:36:03 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/07 03:36:03 INFO mapred.LocalJobRunner: 
14/10/07 03:36:03 INFO mapred.MapTask: Starting flush of map output
14/10/07 03:36:03 INFO mapred.Task: Task:attempt_local1185570365_0001_m_000005_0 is done. And is in the process of committing
14/10/07 03:36:03 INFO mapred.LocalJobRunner: map
14/10/07 03:36:03 INFO mapred.Task: Task 'attempt_local1185570365_0001_m_000005_0' done.
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Finishing task: attempt_local1185570365_0001_m_000005_0
14/10/07 03:36:03 INFO mapred.LocalJobRunner: map task executor complete.
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Waiting for reduce tasks
14/10/07 03:36:03 INFO mapred.LocalJobRunner: Starting task: attempt_local1185570365_0001_r_000000_0
14/10/07 03:36:03 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/07 03:36:03 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: [email protected]
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=363285696, maxSingleShuffleLimit=90821424, mergeThreshold=239768576, ioSortFactor=10, memToMemMergeOutputsThreshold=10
14/10/07 03:36:03 INFO reduce.EventFetcher: attempt_local1185570365_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
14/10/07 03:36:03 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1185570365_0001_m_000001_0 decomp: 2 len: 6 to MEMORY
14/10/07 03:36:03 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1185570365_0001_m_000001_0
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->2
14/10/07 03:36:03 INFO mapreduce.Job:  map 100% reduce 0%
14/10/07 03:36:03 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1185570365_0001_m_000004_0 decomp: 2 len: 6 to MEMORY
14/10/07 03:36:03 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1185570365_0001_m_000004_0
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 2, usedMemory ->4
14/10/07 03:36:03 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1185570365_0001_m_000005_0 decomp: 2 len: 6 to MEMORY
14/10/07 03:36:03 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1185570365_0001_m_000005_0
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 3, commitMemory -> 4, usedMemory ->6
14/10/07 03:36:03 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1185570365_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
14/10/07 03:36:03 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1185570365_0001_m_000002_0
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 4, commitMemory -> 6, usedMemory ->8
14/10/07 03:36:03 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1185570365_0001_m_000003_0 decomp: 2 len: 6 to MEMORY
14/10/07 03:36:03 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local1185570365_0001_m_000003_0
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 5, commitMemory -> 8, usedMemory ->10
14/10/07 03:36:03 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1185570365_0001_m_000000_0 decomp: 21 len: 25 to MEMORY
14/10/07 03:36:03 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local1185570365_0001_m_000000_0
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 6, commitMemory -> 10, usedMemory ->31
14/10/07 03:36:03 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
14/10/07 03:36:03 INFO mapred.LocalJobRunner: 6 / 6 copied.
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: finalMerge called with 6 in-memory map-outputs and 0 on-disk map-outputs
14/10/07 03:36:03 INFO mapred.Merger: Merging 6 sorted segments
14/10/07 03:36:03 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: Merged 6 segments, 31 bytes to disk to satisfy reduce memory limit
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
14/10/07 03:36:03 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
14/10/07 03:36:03 INFO mapred.Merger: Merging 1 sorted segments
14/10/07 03:36:03 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
14/10/07 03:36:03 INFO mapred.LocalJobRunner: 6 / 6 copied.
14/10/07 03:36:04 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
14/10/07 03:36:04 INFO mapred.Task: Task:attempt_local1185570365_0001_r_000000_0 is done. And is in the process of committing
14/10/07 03:36:04 INFO mapred.LocalJobRunner: 6 / 6 copied.
14/10/07 03:36:04 INFO mapred.Task: Task attempt_local1185570365_0001_r_000000_0 is allowed to commit now
14/10/07 03:36:04 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1185570365_0001_r_000000_0' to file:/opt/hadoop-2.5.1/grep-temp-767563685/_temporary/0/task_local1185570365_0001_r_000000
14/10/07 03:36:04 INFO mapred.LocalJobRunner: reduce > reduce
14/10/07 03:36:04 INFO mapred.Task: Task 'attempt_local1185570365_0001_r_000000_0' done.
14/10/07 03:36:04 INFO mapred.LocalJobRunner: Finishing task: attempt_local1185570365_0001_r_000000_0
14/10/07 03:36:04 INFO mapred.LocalJobRunner: reduce task executor complete.
14/10/07 03:36:04 INFO mapreduce.Job:  map 100% reduce 100%
14/10/07 03:36:04 INFO mapreduce.Job: Job job_local1185570365_0001 completed successfully
14/10/07 03:36:04 INFO mapreduce.Job: Counters: 33
	File System Counters
		FILE: Number of bytes read=114663
		FILE: Number of bytes written=1613316
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
	Map-Reduce Framework
		Map input records=405
		Map output records=1
		Map output bytes=17
		Map output materialized bytes=55
		Input split bytes=657
		Combine input records=1
		Combine output records=1
		Reduce input groups=1
		Reduce shuffle bytes=55
		Reduce input records=1
		Reduce output records=1
		Spilled Records=2
		Shuffled Maps =6
		Failed Shuffles=0
		Merged Map outputs=6
		GC time elapsed (ms)=225
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=1106100224
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=15649
	File Output Format Counters 
		Bytes Written=123
14/10/07 03:36:04 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/opt/hadoop-2.5.1/output already exists
	at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
	at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
	at org.apache.hadoop.examples.Grep.run(Grep.java:92)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.examples.Grep.main(Grep.java:101)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

Output directory file:/opt/hadoop-2.5.1/output already exists: ah, the cause is that the output directory already existed (I had created it earlier while troubleshooting).

Delete the output directory (rm -rf output);
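
MapReduce deliberately refuses to overwrite an existing output directory, so a rerun-safe pattern is to clear the output directory before resubmitting. A sketch with a stand-in command (in this walkthrough the real command is the bin/hadoop jar line):

```shell
# Remove the job's output directory, if any, then run the job command.
run_fresh() {
  out=$1; shift
  rm -rf "$out"   # clear stale output so the job can create the directory anew
  "$@"
}
# Stand-in "job" that writes a part file, in place of the bin/hadoop jar command:
run_fresh /tmp/demo-out sh -c 'mkdir /tmp/demo-out && echo ok > /tmp/demo-out/part-r-00000'
cat /tmp/demo-out/part-r-00000
```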

then run the "bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar grep input output 'dfs[a-z.]+'" command again. The output:

14/10/08 05:57:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/08 05:57:35 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/10/08 05:57:35 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/10/08 05:57:36 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
14/10/08 05:57:36 INFO input.FileInputFormat: Total input paths to process : 6
14/10/08 05:57:36 INFO mapreduce.JobSubmitter: number of splits:6
14/10/08 05:57:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local380762736_0001
14/10/08 05:57:37 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/staging/root380762736/.staging/job_local380762736_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/10/08 05:57:37 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/staging/root380762736/.staging/job_local380762736_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/10/08 05:57:38 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local380762736_0001/job_local380762736_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/10/08 05:57:38 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local380762736_0001/job_local380762736_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/10/08 05:57:38 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/10/08 05:57:38 INFO mapreduce.Job: Running job: job_local380762736_0001
14/10/08 05:57:38 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/10/08 05:57:38 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/10/08 05:57:38 INFO mapred.LocalJobRunner: Waiting for map tasks
14/10/08 05:57:38 INFO mapred.LocalJobRunner: Starting task: attempt_local380762736_0001_m_000000_0
14/10/08 05:57:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:39 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/hadoop-policy.xml:0+9201
14/10/08 05:57:39 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/08 05:57:39 INFO mapreduce.Job: Job job_local380762736_0001 running in uber mode : false
14/10/08 05:57:39 INFO mapreduce.Job:  map 0% reduce 0%
14/10/08 05:57:43 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/08 05:57:43 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/08 05:57:43 INFO mapred.MapTask: soft limit at 83886080
14/10/08 05:57:43 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/08 05:57:43 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/08 05:57:44 INFO mapred.LocalJobRunner: 
14/10/08 05:57:44 INFO mapred.MapTask: Starting flush of map output
14/10/08 05:57:44 INFO mapred.MapTask: Spilling map output
14/10/08 05:57:44 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
14/10/08 05:57:44 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
14/10/08 05:57:44 INFO mapred.MapTask: Finished spill 0
14/10/08 05:57:44 INFO mapred.Task: Task:attempt_local380762736_0001_m_000000_0 is done. And is in the process of committing
14/10/08 05:57:45 INFO mapred.LocalJobRunner: map
14/10/08 05:57:45 INFO mapred.Task: Task 'attempt_local380762736_0001_m_000000_0' done.
14/10/08 05:57:45 INFO mapred.LocalJobRunner: Finishing task: attempt_local380762736_0001_m_000000_0
14/10/08 05:57:45 INFO mapred.LocalJobRunner: Starting task: attempt_local380762736_0001_m_000001_0
14/10/08 05:57:45 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:45 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/capacity-scheduler.xml:0+3589
14/10/08 05:57:45 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/08 05:57:45 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/08 05:57:45 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/08 05:57:45 INFO mapred.MapTask: soft limit at 83886080
14/10/08 05:57:45 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/08 05:57:45 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/08 05:57:45 INFO mapred.LocalJobRunner: 
14/10/08 05:57:45 INFO mapred.MapTask: Starting flush of map output
14/10/08 05:57:45 INFO mapred.Task: Task:attempt_local380762736_0001_m_000001_0 is done. And is in the process of committing
14/10/08 05:57:45 INFO mapred.LocalJobRunner: map
14/10/08 05:57:45 INFO mapred.Task: Task 'attempt_local380762736_0001_m_000001_0' done.
14/10/08 05:57:45 INFO mapred.LocalJobRunner: Finishing task: attempt_local380762736_0001_m_000001_0
14/10/08 05:57:45 INFO mapred.LocalJobRunner: Starting task: attempt_local380762736_0001_m_000002_0
14/10/08 05:57:45 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:45 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/hdfs-site.xml:0+775
14/10/08 05:57:45 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/08 05:57:46 INFO mapreduce.Job:  map 100% reduce 0%
14/10/08 05:57:46 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/08 05:57:46 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/08 05:57:46 INFO mapred.MapTask: soft limit at 83886080
14/10/08 05:57:46 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/08 05:57:46 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/08 05:57:46 INFO mapred.LocalJobRunner: 
14/10/08 05:57:46 INFO mapred.MapTask: Starting flush of map output
14/10/08 05:57:46 INFO mapred.Task: Task:attempt_local380762736_0001_m_000002_0 is done. And is in the process of committing
14/10/08 05:57:46 INFO mapred.LocalJobRunner: map
14/10/08 05:57:46 INFO mapred.Task: Task 'attempt_local380762736_0001_m_000002_0' done.
14/10/08 05:57:46 INFO mapred.LocalJobRunner: Finishing task: attempt_local380762736_0001_m_000002_0
14/10/08 05:57:46 INFO mapred.LocalJobRunner: Starting task: attempt_local380762736_0001_m_000003_0
14/10/08 05:57:46 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:46 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/core-site.xml:0+774
14/10/08 05:57:46 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/08 05:57:47 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/08 05:57:47 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/08 05:57:47 INFO mapred.MapTask: soft limit at 83886080
14/10/08 05:57:47 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/08 05:57:47 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/08 05:57:47 INFO mapred.LocalJobRunner: 
14/10/08 05:57:47 INFO mapred.MapTask: Starting flush of map output
14/10/08 05:57:47 INFO mapred.Task: Task:attempt_local380762736_0001_m_000003_0 is done. And is in the process of committing
14/10/08 05:57:47 INFO mapred.LocalJobRunner: map
14/10/08 05:57:47 INFO mapred.Task: Task 'attempt_local380762736_0001_m_000003_0' done.
14/10/08 05:57:47 INFO mapred.LocalJobRunner: Finishing task: attempt_local380762736_0001_m_000003_0
14/10/08 05:57:47 INFO mapred.LocalJobRunner: Starting task: attempt_local380762736_0001_m_000004_0
14/10/08 05:57:47 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:47 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/yarn-site.xml:0+690
14/10/08 05:57:47 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/08 05:57:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/08 05:57:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/08 05:57:49 INFO mapred.MapTask: soft limit at 83886080
14/10/08 05:57:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/08 05:57:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/08 05:57:49 INFO mapred.LocalJobRunner: 
14/10/08 05:57:49 INFO mapred.MapTask: Starting flush of map output
14/10/08 05:57:49 INFO mapred.Task: Task:attempt_local380762736_0001_m_000004_0 is done. And is in the process of committing
14/10/08 05:57:49 INFO mapred.LocalJobRunner: map
14/10/08 05:57:49 INFO mapred.Task: Task 'attempt_local380762736_0001_m_000004_0' done.
14/10/08 05:57:49 INFO mapred.LocalJobRunner: Finishing task: attempt_local380762736_0001_m_000004_0
14/10/08 05:57:49 INFO mapred.LocalJobRunner: Starting task: attempt_local380762736_0001_m_000005_0
14/10/08 05:57:49 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:49 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/input/httpfs-site.xml:0+620
14/10/08 05:57:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/10/08 05:57:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/10/08 05:57:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/10/08 05:57:49 INFO mapred.MapTask: soft limit at 83886080
14/10/08 05:57:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/10/08 05:57:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/10/08 05:57:49 INFO mapred.LocalJobRunner: 
14/10/08 05:57:49 INFO mapred.MapTask: Starting flush of map output
14/10/08 05:57:49 INFO mapred.Task: Task:attempt_local380762736_0001_m_000005_0 is done. And is in the process of committing
14/10/08 05:57:49 INFO mapred.LocalJobRunner: map
14/10/08 05:57:49 INFO mapred.Task: Task 'attempt_local380762736_0001_m_000005_0' done.
14/10/08 05:57:49 INFO mapred.LocalJobRunner: Finishing task: attempt_local380762736_0001_m_000005_0
14/10/08 05:57:49 INFO mapred.LocalJobRunner: map task executor complete.
14/10/08 05:57:49 INFO mapred.LocalJobRunner: Waiting for reduce tasks
14/10/08 05:57:49 INFO mapred.LocalJobRunner: Starting task: attempt_local380762736_0001_r_000000_0
14/10/08 05:57:49 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:49 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=363285696, maxSingleShuffleLimit=90821424, mergeThreshold=239768576, ioSortFactor=10, memToMemMergeOutputsThreshold=10
14/10/08 05:57:50 INFO reduce.EventFetcher: attempt_local380762736_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
14/10/08 05:57:50 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local380762736_0001_m_000000_0 decomp: 21 len: 25 to MEMORY
14/10/08 05:57:50 INFO reduce.InMemoryMapOutput: Read 21 bytes from map-output for attempt_local380762736_0001_m_000000_0
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 21, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->21
14/10/08 05:57:50 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local380762736_0001_m_000004_0 decomp: 2 len: 6 to MEMORY
14/10/08 05:57:50 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local380762736_0001_m_000004_0
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 2, commitMemory -> 21, usedMemory ->23
14/10/08 05:57:50 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local380762736_0001_m_000003_0 decomp: 2 len: 6 to MEMORY
14/10/08 05:57:50 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local380762736_0001_m_000003_0
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 3, commitMemory -> 23, usedMemory ->25
14/10/08 05:57:50 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local380762736_0001_m_000005_0 decomp: 2 len: 6 to MEMORY
14/10/08 05:57:50 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local380762736_0001_m_000005_0
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 4, commitMemory -> 25, usedMemory ->27
14/10/08 05:57:50 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local380762736_0001_m_000001_0 decomp: 2 len: 6 to MEMORY
14/10/08 05:57:50 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local380762736_0001_m_000001_0
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 5, commitMemory -> 27, usedMemory ->29
14/10/08 05:57:50 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local380762736_0001_m_000002_0 decomp: 2 len: 6 to MEMORY
14/10/08 05:57:50 INFO reduce.InMemoryMapOutput: Read 2 bytes from map-output for attempt_local380762736_0001_m_000002_0
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 2, inMemoryMapOutputs.size() -> 6, commitMemory -> 29, usedMemory ->31
14/10/08 05:57:50 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
14/10/08 05:57:50 INFO mapred.LocalJobRunner: 6 / 6 copied.
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: finalMerge called with 6 in-memory map-outputs and 0 on-disk map-outputs
14/10/08 05:57:50 INFO mapred.Merger: Merging 6 sorted segments
14/10/08 05:57:50 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: Merged 6 segments, 31 bytes to disk to satisfy reduce memory limit
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: Merging 1 files, 25 bytes from disk
14/10/08 05:57:50 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
14/10/08 05:57:50 INFO mapred.Merger: Merging 1 sorted segments
14/10/08 05:57:50 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 10 bytes
14/10/08 05:57:50 INFO mapred.LocalJobRunner: 6 / 6 copied.
14/10/08 05:57:50 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
14/10/08 05:57:50 INFO mapred.Task: Task:attempt_local380762736_0001_r_000000_0 is done. And is in the process of committing
14/10/08 05:57:50 INFO mapred.LocalJobRunner: 6 / 6 copied.
14/10/08 05:57:50 INFO mapred.Task: Task attempt_local380762736_0001_r_000000_0 is allowed to commit now
14/10/08 05:57:50 INFO output.FileOutputCommitter: Saved output of task 'attempt_local380762736_0001_r_000000_0' to file:/opt/hadoop-2.5.1/grep-temp-913340630/_temporary/0/task_local380762736_0001_r_000000
14/10/08 05:57:50 INFO mapred.LocalJobRunner: reduce > reduce
14/10/08 05:57:50 INFO mapred.Task: Task 'attempt_local380762736_0001_r_000000_0' done.
14/10/08 05:57:50 INFO mapred.LocalJobRunner: Finishing task: attempt_local380762736_0001_r_000000_0
14/10/08 05:57:50 INFO mapred.LocalJobRunner: reduce task executor complete.
14/10/08 05:57:51 INFO mapreduce.Job:  map 100% reduce 100%
14/10/08 05:57:51 INFO mapreduce.Job: Job job_local380762736_0001 completed successfully
14/10/08 05:57:51 INFO mapreduce.Job: Counters: 33
	File System Counters
		FILE: Number of bytes read=114663
		FILE: Number of bytes written=1604636
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
	Map-Reduce Framework
		Map input records=405
		Map output records=1
		Map output bytes=17
		Map output materialized bytes=55
		Input split bytes=657
		Combine input records=1
		Combine output records=1
		Reduce input groups=1
		Reduce shuffle bytes=55
		Reduce input records=1
		Reduce output records=1
		Spilled Records=2
		Shuffled Maps =6
		Failed Shuffles=0
		Merged Map outputs=6
		GC time elapsed (ms)=2359
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
		Total committed heap usage (bytes)=1106096128
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=15649
	File Output Format Counters 
		Bytes Written=123
14/10/08 05:57:51 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
14/10/08 05:57:51 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
14/10/08 05:57:51 INFO input.FileInputFormat: Total input paths to process : 1
14/10/08 05:57:51 INFO mapreduce.JobSubmitter: number of splits:1
14/10/08 05:57:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local571678604_0002
14/10/08 05:57:51 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/staging/root571678604/.staging/job_local571678604_0002/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/10/08 05:57:51 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/staging/root571678604/.staging/job_local571678604_0002/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/10/08 05:57:52 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local571678604_0002/job_local571678604_0002.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/10/08 05:57:52 WARN conf.Configuration: file:/tmp/hadoop-root/mapred/local/localRunner/root/job_local571678604_0002/job_local571678604_0002.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/10/08 05:57:52 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/10/08 05:57:52 INFO mapreduce.Job: Running job: job_local571678604_0002
14/10/08 05:57:52 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/10/08 05:57:52 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
14/10/08 05:57:52 INFO mapred.LocalJobRunner: Waiting for map tasks
14/10/08 05:57:52 INFO mapred.LocalJobRunner: Starting task: attempt_local571678604_0002_m_000000_0
14/10/08 05:57:52 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/10/08 05:57:52 INFO mapred.MapTask: Processing split: file:/opt/hadoop-2.5.1/grep-temp-913340630/part-r-00000:0+111
14/10/08 05:57:52 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14