Error When Running HDFS Commands in Hadoop

The error message is as follows:
WARN hdfs.DataStreamer: DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.33.102:50010,DS-e11bb105-fbb8-4b44-8759-cac614ddceb0,DISK], DatanodeInfoWithStorage[192.168.33.103:50010,DS-721ca7d3-d6a2-4a95-8156-dcbc3bbf7316,DISK]], original=[DatanodeInfoWithStorage[192.168.33.103:50010,DS-721ca7d3-d6a2-4a95-8156-dcbc3bbf7316,DISK], DatanodeInfoWithStorage[192.168.33.102:50010,DS-e11bb105-fbb8-4b44-8759-cac614ddceb0,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1228)
    at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1298)
    at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1473)
    at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1387)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:662)

Solution:
This error typically shows up on small clusters (two or three DataNodes), where there is no spare node available to replace a failed DataNode in the write pipeline. Modify the hdfs-site.xml file and add or update the following two properties:

<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
</property>
<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
</property>
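
As the log itself notes, these options can also be set per client instead of cluster-wide. Below is a minimal sketch in Java, assuming the Hadoop client libraries are on the classpath and fs.defaultFS points at the cluster; the class name and file paths are hypothetical, for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientConfigExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Keep datanode replacement enabled, but never try to swap in a new
        // DataNode when one in the existing write pipeline fails
        // (same effect as the hdfs-site.xml properties above).
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        try (FileSystem fs = FileSystem.get(conf)) {
            // Hypothetical paths, for illustration only.
            fs.copyFromLocalFile(new Path("/tmp/local.txt"), new Path("/user/test/local.txt"));
        }
    }
}

With the policy set to NEVER, the client keeps writing through the remaining healthy DataNodes instead of failing the whole write. For one-off commands, the same settings can usually be passed on the command line with the generic -D option, for example: hdfs dfs -D dfs.client.block.write.replace-datanode-on-failure.policy=NEVER -put local.txt /user/test/ (paths hypothetical).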