Hadoop Spark: Self-suppression not permitted on HDFS

I am getting this HDFS error:

    Self-suppression not permitted, Failed to replace a bad datanode on the
    existing pipeline due to no more good datanodes being available to try
when I run a Spark job on a 4-machine cluster. HDFS is managed by YARN, but Spark runs on its own cluster (so it is not managed by YARN).

It happens when the job is roughly 80% complete.

Does this indicate that HDFS is "too slow" for Spark?

Update

The potential solution I am trying now is to add the following XML snippet to hdfs-site.xml:

    <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
        <value>ALWAYS</value>
    </property>
    <property>
        <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
        <value>true</value>
    </property>
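
Note that these are HDFS client-side settings, so they have to reach the process that actually writes to HDFS, i.e. the Spark driver and executors, not only the HDFS daemons. As a minimal sketch of setting the same two properties programmatically on the Hadoop configuration Spark hands to its HDFS client (the object name HdfsWritePolicyExample and the app name "hdfs-write-job" are placeholders, not from the original post):

    import org.apache.spark.{SparkConf, SparkContext}

    object HdfsWritePolicyExample {
      def main(args: Array[String]): Unit = {
        // The master URL is expected to be supplied by spark-submit.
        val sc = new SparkContext(new SparkConf().setAppName("hdfs-write-job"))

        // Always try to replace a failed datanode in the write pipeline...
        sc.hadoopConfiguration.set(
          "dfs.client.block.write.replace-datanode-on-failure.policy", "ALWAYS")
        // ...but if no replacement datanode is available, keep writing with
        // the remaining ones instead of failing the whole write.
        sc.hadoopConfiguration.set(
          "dfs.client.block.write.replace-datanode-on-failure.best-effort", "true")

        // ... run the job that writes to HDFS ...

        sc.stop()
      }
    }

These settings must be in place before the first write to HDFS; putting them in hdfs-site.xml on every node that runs Spark executors, as above, achieves the same thing.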


I tried the settings above, but I am still getting the same error. Can anyone confirm whether this solution worked for them, or say how they fixed it?