Amazon EC2: Apache Spark EC2 job not running. No space left on device


I have run my program many times on a 20-node cluster. Now, every time I run it, it suddenly fails with the following error:

15/04/19 16:52:35 WARN scheduler.TaskSetManager: Lost task 35.0 in stage 9.0 (TID 384, ip-XXX.XXX.compute.internal): java.io.FileNotFoundException: /mnt/spark/spark-local-XXX-ebd3/18/shuffle_2_35_64 (No space left on device)
    java.io.FileOutputStream.open(Native Method)
    java.io.FileOutputStream.<init>(FileOutputStream.java:221)
    org.apache.spark.storage.DiskBlockObjectWriter.open(BlockObjectWriter.scala:123)
    org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:192)
    org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:67)
    org.apache.spark.shuffle.hash.HashShuffleWriter$$anonfun$write$1.apply(HashShuffleWriter.scala:65)
    scala.collection.Iterator$class.foreach(Iterator.scala:727)
    scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    org.apache.spark.shuffle.hash.HashShuffleWriter.write(HashShuffleWriter.scala:65)
    org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    org.apache.spark.scheduler.Task.run(Task.scala:54)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:745)
Checking the UI, it says nothing is stored on the nodes. I have probably run the program 15 times, and this only started happening out of nowhere. Why does it suddenly occur, and how do I fix it?

"No space left on device" is a fairly self-explanatory exception: that node has no space left on the mount where the Spark local files are written:
/mnt/spark/

Solution: go to the node (or nodes) and clean it up. `rm -rf` FTW.


If jobs are interrupted before they terminate, whether through manual intervention or a failure, they often leave temporary data behind.
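
As a minimal illustration of that cleanup (the path matches the error above; adjust it if your `spark.local.dir` points somewhere else, and only do this while no job is running):

    # Run on each affected worker node.
    df -h /mnt                # confirm the mount holding the Spark scratch space is full
    du -sh /mnt/spark/*       # see which leftover spark-local-* directories hold the data
    rm -rf /mnt/spark/*       # remove the stale shuffle/temp directories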

Which local files are being written? I added `conf.set("spark.shuffle.consolidateFiles", "true")` and so far it is working fine. One time when it happened, though, I really just ran the program again and it ran fine, so what happened to the data that was filling up the space? Also, how do I get onto a node? I am using the spark-ec2 script.

You need to use your PEM certificate to `ssh` into the node; this has nothing to do with Spark. There are plenty of resources on that:
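
As a rough sketch of getting onto the nodes, assuming a cluster launched with the spark-ec2 script (the cluster name, key-pair name, and PEM path below are placeholders):

    # Log into the master of a spark-ec2 cluster (names and paths are placeholders).
    ./spark-ec2 -k my-keypair -i ~/my-key.pem login my-cluster

    # Or ssh directly to a specific node, using its address from the EC2 console;
    # the spark-ec2 AMI typically uses the root user.
    ssh -i ~/my-key.pem root@ec2-XX-XX-XX-XX.compute-1.amazonaws.com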