Action starts at Wed Jun 06 10:40:54 CST 2018 : Read /src/t2
Append to /src/t2
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[10.239.12.140:50010,DS-d2efbea0-5fb7-46fa-8b16-6c54364fe67c,DISK]], original=[DatanodeInfoWithStorage[10.239.12.140:50010,DS-d2efbea0-5fb7-46fa-8b16-6c54364fe67c,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:925)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:988)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1156)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:454)
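
For reference, the exception message itself points at the client-side setting. Below is a minimal sketch of relaxing it for a one- or two-datanode cluster, assuming a Hadoop 2.x client on the classpath; the property names come from the message above and hdfs-default.xml, the class name is just for illustration, and this only works around the symptom rather than fixing an under-replicated setup:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RelaxedPipelinePolicy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With only one or two datanodes there is no spare node to swap into the
        // write pipeline, so tell the client not to insist on a replacement.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        // Alternative: keep the DEFAULT policy but let the client continue with the
        // remaining datanodes when no replacement can be found.
        // conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

        // Any FileSystem built from this conf appends without demanding a new datanode.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("connected to " + fs.getUri());
    }
}

NEVER simply stops the client from looking for a replacement datanode; on larger clusters the DEFAULT policy is usually what you want.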
Will check details in local env.
I guess this issue is caused by the replication factor being greater than the number of live datanodes. For example, replication=3 while only 2 datanodes are alive.
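
To confirm that guess, the file's replication factor can be compared against the number of live datanodes reported by the namenode. A rough sketch, again assuming a Hadoop 2.x client, with the /src/t2 path taken from the log; the class name and the choice to lower the replication in place are illustrative only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class ReplicationVsLiveDatanodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path file = new Path("/src/t2");

        // Assumes fs.defaultFS points at the HDFS cluster.
        DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
        short replication = dfs.getFileStatus(file).getReplication();
        DatanodeInfo[] live = dfs.getDataNodeStats(HdfsConstants.DatanodeReportType.LIVE);

        System.out.println("replication=" + replication + ", live datanodes=" + live.length);
        if (replication > live.length) {
            // In this state the write pipeline cannot be rebuilt after a datanode failure;
            // lowering the replication (or adding datanodes) is the usual fix.
            dfs.setReplication(file, (short) live.length);
        }
    }
}

If the guess is right, adding datanodes or lowering the replication factor avoids the error without touching the client-side policy.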
The action that triggers it is: append -length 100 -file /src/t2, and its log is the one shown above.
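
For completeness, that append action should boil down to a plain HDFS client append of 100 bytes, which is the code path shown in the stack trace. A sketch under that assumption (the test tool's own option parsing and logging are omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendAction {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/src/t2");

        byte[] payload = new byte[100]; // mirrors -length 100
        // fs.append() is the call that ends up in DFSOutputStream$DataStreamer,
        // where the pipeline-recovery failure in the log is raised.
        try (FSDataOutputStream out = fs.append(file)) {
            out.write(payload);
        }
    }
}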