Spark application fails with java.lang.OutOfMemoryError: Java heap space

Tags: apache-spark, sparkcore

I have a Spark program that uses the wholeTextFiles API to read files. The application runs with a single executor that has 6 GB of RAM and 4 cores.

When I feed it several small files (totaling about 3 GB), the application processes them all successfully. But when I feed it a single 2 GB file, the Spark application fails with the error below.

Is a Spark application unable to read a file of 2 GB? Or do I need to change some memory setting?
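
For context, here is a minimal sketch of the setup described above. The original code is not shown in the question, so the input path and the line-counting transformation are placeholder assumptions; only the use of wholeTextFiles, the collect() visible in the stack trace below, and the single 6 GB / 4-core executor come from the question itself.

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical reconstruction of the failing job. Submitted with the
    // configuration from the question, e.g. on YARN:
    //   spark-submit --class WholeTextFilesJob --num-executors 1 \
    //     --executor-memory 6g --executor-cores 4 app.jar
    object WholeTextFilesJob {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("WholeTextFilesJob"))

        // wholeTextFiles returns an RDD[(path, contents)], buffering each
        // file's entire contents in memory as a single record.
        val files = sc.wholeTextFiles("hdfs:///path/to/input") // placeholder path

        // Placeholder transformation: count lines per file, then collect,
        // matching the RDD.collect frames in the stack trace below.
        val lineCounts = files.mapValues(_.count(_ == '\n'))
        lineCounts.collect().foreach(println)

        sc.stop()
      }
    }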

17/06/14 21:56:22 ERROR Executor: Exception in task 0.46 in stage 0.0 (TID 46)
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at org.spark_project.guava.io.ByteStreams.copy(ByteStreams.java:211)
    at org.spark_project.guava.io.ByteStreams.toByteArray(ByteStreams.java:252)
    at org.apache.spark.input.WholeTextFileRecordReader.nextKeyValue(WholeTextFileRecordReader.scala:79)
    at org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader.nextKeyValue(CombineFileRecordReader.java:69)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:182)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
    at scala.collection.AbstractIterator.to(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
    at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
    at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:912)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:912)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1899)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
    at org.apache.spark.scheduler.Task.run(Task.scala:86)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
17/06/14 21:56:22 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]

Comments:

Could you please share a sample of your code? An example of the aggregation/transformation you apply to the data would also help. What file format are you using? My assumption is that your input files are not splittable. If so, Spark will have exactly as many partitions as there are files, in which case several small files totaling ~3 GB are handled more efficiently than a single 2 GB file. You could repartition the RDD by explicitly specifying the exact number of partitions.

I am using plain text files. For a 2 GB file, what repartition count should I use to avoid this error?

@AKC Did you ever solve this?
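
The first comment suggests controlling the partition count explicitly. A minimal sketch of that suggestion follows; the input path and the partition count of 16 are arbitrary illustrative values, not recommendations from the thread.

    import org.apache.spark.{SparkConf, SparkContext}

    object RepartitionSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("RepartitionSketch"))

        // The second argument (minPartitions) is only a hint: wholeTextFiles
        // yields one (path, contents) record per file, and a single record
        // can never be split across partitions.
        val files = sc.wholeTextFiles("hdfs:///path/to/input", 16)

        // Explicit repartitioning, as the commenter suggests.
        val redistributed = files.repartition(16)
        println(redistributed.getNumPartitions)

        sc.stop()
      }
    }

Note, however, that repartitioning redistributes records only after they have been read: wholeTextFiles materializes each file as one record (the ByteStreams.toByteArray call in the stack trace above), so a 2 GB file must still fit in executor memory as a single record regardless of the partition count.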