I am trying to append to a file on HDFS in a single-node cluster. I also tried a 2-node cluster, but I get the same exception.
In hdfs-site.xml I have dfs.replication set to 1. If I set dfs.client.block.write.replace-datanode-on-failure.policy to DEFAULT, I get the following exception:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.10.37.16:50010], original=[10.10.37.16:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
If I follow the recommendation in the comment for that configuration in hdfs-default.xml for extremely small clusters (3 nodes or less) and set dfs.client.block.write.replace-datanode-on-failure.policy to NEVER, I get the following exception instead:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot append to file/user/hadoop/test. Name node is in safe mode. The reported blocks 1277 has reached the threshold 1.0000 of total blocks 1277. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 3 seconds.
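This second exception is transient: the message itself says safe mode will be turned off automatically in 3 seconds, so simply retrying the append shortly after the name node starts may be enough. The safe-mode state can also be inspected or left explicitly with the standard hdfs dfsadmin commands (run against the live cluster, so shown here untested):

```shell
hdfs dfsadmin -safemode get    # report whether the name node is in safe mode
hdfs dfsadmin -safemode wait   # block until the name node leaves safe mode
hdfs dfsadmin -safemode leave  # force the name node out of safe mode
```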
This is how I append:
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://MY-MACHINE:8020/user/hadoop");
conf.set("hadoop.job.ugi", "hadoop");
FileSystem fs = FileSystem.get(conf);
OutputStream out = fs.append(new Path("/user/hadoop/test"));
PrintWriter writer = new PrintWriter(out);
writer.print("hello world");
writer.close();
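One thing worth knowing about this code, independent of the exceptions above (those are thrown by fs.append itself): PrintWriter never throws IOException. If the underlying stream fails during print or close, the failure is recorded only in an internal flag that must be queried via checkError(). A self-contained demonstration with a stand-in stream that always fails:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintWriter;

public class PrintWriterErrors {
    // An OutputStream that always fails, standing in for a broken write pipeline.
    static class FailingStream extends OutputStream {
        @Override
        public void write(int b) throws IOException {
            throw new IOException("pipeline error");
        }
    }

    public static void main(String[] args) {
        PrintWriter writer = new PrintWriter(new FailingStream());
        writer.print("hello world"); // no exception surfaces here (data is buffered)
        writer.close();              // the flush fails, but close() swallows the IOException
        // The only evidence of the failure is the internal error flag:
        System.out.println(writer.checkError()); // prints "true"
    }
}
```

So a silent append that writes nothing can also be a swallowed error rather than a configuration problem; checking checkError() after close rules that out.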
Is there something I am doing wrong in the code?
Maybe something is missing in the configuration?
Any help would be appreciated!
EDIT
Even though dfs.replication is set to 1, when I check the status of the file with
FileStatus[] status = fs.listStatus(new Path("/user/hadoop"));
I find that status[i].block_replication is set to 3. I don't think this is the problem, because when I changed the value of dfs.replication to 0 I got a relevant exception, so apparently it does obey the value of dfs.replication. Still, to be on the safe side, is there a way to change the block_replication value per file?
SOLUTION
As I mentioned in the edit, even though dfs.replication is set to 1, fileStatus.block_replication is set to 3.
A possible solution is to run
hadoop fs -setrep -w 1 -R /user/hadoop/
This recursively changes the replication factor of every file in the given directory. The documentation for the command can be found here.
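The same per-file change can also be made programmatically through FileSystem.setReplication(Path, short). A sketch reusing the fs handle from the question (it needs a live cluster, so it is shown as an untested outline):

```java
// Set the replication factor of a single file to 1.
// Assumes `fs` is the FileSystem instance built in the question.
fs.setReplication(new Path("/user/hadoop/test"), (short) 1);
```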
What remains is to find out why the value in hdfs-site.xml is being ignored, and how to force the value of 1 to be the default.
EDIT
It turns out that the dfs.replication property must also be set in the Configuration instance; otherwise the client requests the default replication factor for the file, which is 3, regardless of the value set in hdfs-site.xml:
conf.set("dfs.replication","1");
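Putting the pieces together, a sketch of the append with the fix applied (same host, port, and path as in the question; this needs a running cluster and the Hadoop client libraries on the classpath, so it is shown as an untested outline):

```java
import java.io.OutputStream;
import java.io.PrintWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://MY-MACHINE:8020/user/hadoop");
        conf.set("hadoop.job.ugi", "hadoop");
        // The crucial line: without it the client requests the default
        // replication factor (3) regardless of hdfs-site.xml.
        conf.set("dfs.replication", "1");

        FileSystem fs = FileSystem.get(conf);
        OutputStream out = fs.append(new Path("/user/hadoop/test"));
        try (PrintWriter writer = new PrintWriter(out)) {
            writer.print("hello world");
        }
        fs.close();
    }
}
```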