Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available


java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.2.1:50010], original=[192.168.2.1:50010]). The current Failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.

This error typically shows up on small clusters: a datanode in the write pipeline fails, and there is no spare datanode available to replace it. To stop the client from attempting the replacement, add the following to Hadoop's hdfs-site.xml:


<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

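The exception text itself notes that a client may also set this policy in its own configuration. Below is a minimal sketch of the client-side equivalent using the standard Hadoop Java API; the class name and the test path /tmp/example.txt are hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWritePolicyExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side equivalent of the hdfs-site.xml settings above:
        // keep the feature enabled, but never try to replace a failed
        // datanode in an existing write pipeline.
        conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");

        FileSystem fs = FileSystem.get(conf);
        // A write that would previously fail once a pipeline datanode dropped out.
        try (FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"))) {
            out.writeUTF("hello hdfs");
        }
        fs.close();
    }
}

With the policy set to NEVER, the client simply continues writing to the remaining datanode(s) instead of aborting. On clusters with three or more datanodes it is usually better to keep the DEFAULT policy so replicas are restored during the write.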