Apache Spark: org.apache.spark.util.SparkUncaughtExceptionHandler

Tags: apache-spark, java-8, heap-memory

While running a Spark job, the following error occurs on the executors. I am reading data from a database; the data contains a UTF-8 string column:

iterator.next().getString(row.fieldIndex("short_name"))

I am processing 100 GB of data with 10 executors of 14 GB each. I started with 12 GB executors, and even with 14 GB plus 3 GB of memory overhead I still hit the same error:
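For reference, the executor sizing described above corresponds roughly to the following spark-submit flags (a sketch only; the actual command, application jar, and other options from the job are not shown in the question):

```shell
# Hypothetical spark-submit invocation matching the sizing described above:
# 10 executors with a 14 GB heap each, plus 3 GB of overhead per executor.
# On Spark versions before 2.3 the overhead key is spark.yarn.executor.memoryOverhead.
spark-submit \
  --num-executors 10 \
  --executor-memory 14g \
  --conf spark.executor.memoryOverhead=3g \
  your-application.jar
```

Note that `--executor-memory` sets the executor JVM heap, while `spark.executor.memoryOverhead` covers off-heap allocations; raising the overhead therefore does not relieve a `java.lang.OutOfMemoryError: Java heap space`, which is an on-heap failure.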

ERROR org.apache.spark.util.SparkUncaughtExceptionHandler  - Uncaught exception in thread Thread[Executor task launch worker for task 359,5,main]
java.lang.OutOfMemoryError: Java heap space
    at org.apache.spark.unsafe.types.UTF8String.fromAddress(UTF8String.java:135)
    at org.apache.spark.sql.catalyst.expressions.UnsafeRow.getUTF8String(UnsafeRow.java:419)
    at org.apache.spark.sql.execution.columnar.STRING$.getField(ColumnType.scala:452)
    at org.apache.spark.sql.execution.columnar.STRING$.getField(ColumnType.scala:424)
    at org.apache.spark.sql.execution.columnar.compression.RunLengthEncoding$Encoder.gatherCompressibilityStats(compressionSchemes.scala:194)
    at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$$anonfun$gatherCompressibilityStats$1.apply(CompressibleColumnBuilder.scala:74)
    at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$$anonfun$gatherCompressibilityStats$1.apply(CompressibleColumnBuilder.scala:74)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.execution.columnar.compression.CompressibleColumnBuilder$class.gatherCompressibilityStats(CompressibleColumnBuilder.scala:74)