Hadoop Error: java.io.IOException: Unable to initialize any output collector

[[email protected] ~]$ hadoop jar mr.jar cn.hadoop.mr.WCRunner
16/07/24 16:52:08 INFO client.RMProxy: Connecting to ResourceManager at hadoop01/192.168.56.200:8032
16/07/24 16:52:09 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/07/24 16:52:09 INFO input.FileInputFormat: Total input paths to process : 1
16/07/24 16:52:09 INFO mapreduce.JobSubmitter: number of splits:1
16/07/24 16:52:09 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1469343376698_0001
16/07/24 16:52:10 INFO impl.YarnClientImpl: Submitted application application_1469343376698_0001
16/07/24 16:52:10 INFO mapreduce.Job: The url to track the job: http://hadoop01:8088/proxy/application_1469343376698_0001/
16/07/24 16:52:10 INFO mapreduce.Job: Running job: job_1469343376698_0001
16/07/24 16:52:20 INFO mapreduce.Job: Job job_1469343376698_0001 running in uber mode : false
16/07/24 16:52:20 INFO mapreduce.Job:  map 0% reduce 0%
16/07/24 16:52:25 INFO mapreduce.Job: Task Id : attempt_1469343376698_0001_m_000000_0, Status : FAILED
Error: java.io.IOException: Unable to initialize any output collector
	at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
	at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:695)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/07/24 16:52:30 INFO mapreduce.Job: Task Id : attempt_1469343376698_0001_m_000000_1, Status : FAILED
Error: java.io.IOException: Unable to initialize any output collector
	at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
	at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:695)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/07/24 16:52:35 INFO mapreduce.Job: Task Id : attempt_1469343376698_0001_m_000000_2, Status : FAILED
Error: java.io.IOException: Unable to initialize any output collector
	at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:412)
	at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
	at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:695)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:767)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

16/07/24 16:52:42 INFO mapreduce.Job:  map 100% reduce 100%
16/07/24 16:52:42 INFO mapreduce.Job: Job job_1469343376698_0001 failed with state FAILED due to: Task failed task_1469343376698_0001_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

16/07/24 16:52:42 INFO mapreduce.Job: Counters: 9
	Job Counters 
		Failed map tasks=4
		Launched map tasks=4
		Other local map tasks=3
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=15512
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=15512
		Total vcore-seconds taken by all map tasks=15512
		Total megabyte-seconds taken by all map tasks=15884288

The error was caused by the following driver code:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
// The culprit: an IDE auto-import pulled in the wrong Text class
import com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider.Text;

public class WCRunner {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        
        Configuration conf = new Configuration();
        Job wcjob = Job.getInstance(conf);
        
        wcjob.setJarByClass(WCRunner.class);
        
        wcjob.setMapperClass(WCMapper.class);
        wcjob.setReducerClass(WCReducer.class);
        
        wcjob.setOutputKeyClass(Text.class);
        wcjob.setOutputValueClass(LongWritable.class);
        
        wcjob.setMapOutputKeyClass(Text.class);
        wcjob.setMapOutputValueClass(LongWritable.class);
        
        FileInputFormat.setInputPaths(wcjob, "/wc/inputdata/");
        FileOutputFormat.setOutputPath(wcjob, new Path("/wc/output/"));
        
        wcjob.waitForCompletion(true);
    }
}

The problem is that Text here was imported as com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider.Text (a JAX-RS helper class, not a Hadoop type).

The correct type is org.apache.hadoop.io.Text:
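A sketch of the fix, shown as the corrected import section of the driver above. The key point (as I understand the failure mode, not stated verbatim in the log): map output key/value classes must be types that Hadoop's SerializationFactory can serialize, normally Writable implementations; registering a non-Writable class with the same simple name causes MapOutputBuffer initialization to fail with "Unable to initialize any output collector".

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;   // correct: Hadoop's Writable Text,
                                    // NOT com.sun.jersey...XMLJAXBElementProvider.Text
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// With org.apache.hadoop.io.Text, these registrations now name a
// serializable Writable type, so the sort/spill collector can initialize:
//   wcjob.setMapOutputKeyClass(Text.class);
//   wcjob.setOutputKeyClass(Text.class);
```

When two classes share a simple name, the code still compiles cleanly with either import, so the mistake only surfaces at task runtime on the cluster; checking the import block is the quickest diagnostic.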
