Spark "Executor heartbeat timed out" (apache-spark)

I have a simple, reproducible Spark error. (Spark 2.0 + Amazon EMR 5.0, for reference.)

It fails with:

ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 169068 ms
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)

I understand the heartbeat-timeout error usually means a worker died, typically from running out of memory. How do I resolve this?

You can increase the executor heartbeat interval and the network timeout. Also, if you do not have much memory, persisting with MEMORY_AND_DISK is recommended, so that partitions that do not fit in the memory cache are kept on disk instead of being lost:

--conf spark.network.timeout=10000000 --conf spark.executor.heartbeatInterval=10000000 --conf spark.driver.maxResultSize=4g
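For context, each `--conf` takes a `key=value` pair, and Spark accepts timeout values with an explicit unit suffix. Note also that `spark.executor.heartbeatInterval` should be significantly lower than `spark.network.timeout`, otherwise missed heartbeats are never detected before the network timeout fires. A minimal sketch of a full submit command, with more conventional values; the class name and jar are placeholders, not from the original post:

```shell
# Sketch of a spark-submit invocation raising the relevant timeouts.
# spark.network.timeout must stay larger than spark.executor.heartbeatInterval.
# com.example.MyApp and my-app.jar are placeholders.
spark-submit \
  --class com.example.MyApp \
  --conf spark.network.timeout=800s \
  --conf spark.executor.heartbeatInterval=60s \
  --conf spark.driver.maxResultSize=4g \
  my-app.jar
```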

What errors are reported on the workers? If you keep the cluster alive after the failure and open the Spark history server, you should be able to see the executors' stderr and stdout.
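On EMR the executors run under YARN, so besides the history server UI you can also pull the aggregated executor logs from the command line on the master node. A sketch, assuming log aggregation is enabled; `<application_id>` is a placeholder you must replace:

```shell
# List applications (including finished/failed ones) to find the app ID.
yarn application -list -appStates ALL

# Fetch aggregated stdout/stderr from all containers of that application.
# <application_id> looks like application_1474000000000_0001.
yarn logs -applicationId <application_id>
```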