Hadoop exception: hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException
Environment: Ubuntu 11.10, 64-bit
Hadoop version: 1.0.1
Uploading the conf directory into HDFS with put fails:

[email protected]:~/sse/hadoop/hadoop-1.0.1# bin/hadoop fs -put conf input
12/03/15 20:45:37 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/configuration.xsl could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
at org.apache.hadoop.ipc.Client.call(Client.java:1066)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
12/03/15 20:45:37 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/03/15 20:45:37 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/input/configuration.xsl" - Aborting...
put: java.io.IOException: File /user/root/input/configuration.xsl could only be replicated to 0 nodes, instead of 1
12/03/15 20:45:37 ERROR hdfs.DFSClient: Exception closing file /user/root/input/configuration.xsl : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/configuration.xsl could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/input/configuration.xsl could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
at org.apache.hadoop.ipc.Client.call(Client.java:1066)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
The cause: Ubuntu's firewall had not been disabled. With ufw blocking the Hadoop ports, the NameNode saw no live DataNode on which to place the block, which is exactly what "could only be replicated to 0 nodes, instead of 1" reports.

The fix:
Run sudo ufw disable
Run bin/stop-all.sh
Run bin/start-all.sh
Run bin/hadoop fs -put conf input
OK!
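To confirm the DataNode is actually back before retrying the upload, a few quick checks help. This is a minimal sketch for a single-node Hadoop 1.x setup; the daemon names and report wording match Hadoop 1.0.x, but treat the exact output as an assumption:

# confirm the firewall is really off
sudo ufw status

# list the running Hadoop daemons; on a single-node cluster jps should
# show NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker
jps

# ask the NameNode how many DataNodes it sees; the put error above is
# raised when this report shows 0 available DataNodes
bin/hadoop dfsadmin -report

If the machine must keep its firewall on, a gentler fix than sudo ufw disable is to open only the Hadoop ports, e.g. sudo ufw allow 50010/tcp for DataNode data transfer and sudo ufw allow 9000/tcp for NameNode RPC (these port numbers assume Hadoop 1.x defaults with fs.default.name on port 9000).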