Memory leaks Spark Executor: Managed memory leak detected

Tags: memory-leaks, apache-spark, apache-kafka, spark-streaming, mesos

I am deploying Spark jobs on a Mesos cluster (client mode). I have three servers that are able to run Spark jobs. However, after a while (a few days), I get the following error:

15/11/03 19:55:50 ERROR Executor: Managed memory leak detected; size = 33554432 bytes, TID = 387939
15/11/03 19:55:50 ERROR Executor: Exception in task 2.1 in stage 6534.0 (TID 387939)
java.io.FileNotFoundException: /tmp/blockmgr-3acec504-4a55-4aa8-a3e5-dda97ce5d055/03/temp_shuffle_cb37f147-c055-4014-a6ae-fd505cb49f57 (Too many open files)
    at java.io.FileOutputStream.open(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    at org.apache.spark.storage.DiskBlockObjectWriter.open(DiskBlockObjectWriter.scala:88)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.insertAll(BypassMergeSortShuffleWriter.java:110)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
ERROR Executor: Exception in task 5.3 in stage 7561.0 (TID 392220)
org.apache.spark.SparkException: Couldn't connect to leader for topic bid_inventory 9: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator$$anonfun$connectLeader$1.apply(KafkaRDD.scala:164)
    at org.apache.spark.streaming.kafka.KafkaRDD$KafkaRDDIterator$$anonfun$connectLeader$1.apply(KafkaRDD.scala:164)    
But once the job is restarted, everything starts working fine again.


Does anyone know why this happens? Thanks.
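
The "(Too many open files)" in the FileNotFoundException above indicates the executor process has exhausted its file-descriptor limit: the BypassMergeSortShuffleWriter in the stack trace opens one temporary shuffle file per reduce partition for each running map task, so a long-running streaming job can exhaust the default ulimit on the Mesos agents, and restarting releases the descriptors, which matches the observed behaviour. Below is a minimal mitigation sketch, assuming Spark 1.5-era configuration; the app name and numeric values are illustrative and not taken from the question, and raising the open-file limit (ulimit -n) on the agents is the other half of the fix.

import org.apache.spark.{SparkConf, SparkContext}

object ShuffleFdMitigationSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("shuffle-fd-mitigation-sketch")  // illustrative name
      // The bypass writer is only used when the number of reduce partitions is
      // at or below this threshold; setting it below your partition count forces
      // the regular sort-based path, which keeps far fewer files open per task.
      .set("spark.shuffle.sort.bypassMergeThreshold", "50")
      // Fewer shuffle partitions also means fewer temp_shuffle_* files overall
      // (purely illustrative value; tune it to your data volume).
      .set("spark.default.parallelism", "48")

    val sc = new SparkContext(conf)
    // ... run the streaming job as before ...
    sc.stop()
  }
}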

I also ran into this memory-leak error while processing DataFrames in Spark. I do not know how to troubleshoot it, because it gives no information about where or why the leak occurred. This is a Spark bug to be addressed by the Spark developers, i.e., we should not be seeing this at all. See
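
One way to make the leak harder to miss is an internal, undocumented executor flag (used mainly in Spark's own tests; treat the name and behaviour as an assumption for your Spark version): when enabled, the task that failed to free its managed memory fails with a SparkException instead of only emitting the "Managed memory leak detected" log line. It still does not point at the allocation site, but it does surface the offending stage and task. A minimal sketch, assuming Spark 1.5+; the app name is illustrative.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object LeakDetectionSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("leak-detection-sketch")  // illustrative name
      // Internal flag checked by the executor before it logs the managed-memory-leak
      // error: when true, the task throws a SparkException instead, so the leak
      // shows up as a task failure rather than a buried log line.
      .set("spark.unsafe.exceptionOnMemoryLeak", "true")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // ... build and run the DataFrame job as before ...
    sc.stop()
  }
}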