Python: java.lang.StackOverflowError when converting an RDD to a DataFrame
I'm trying to calculate tf-idf scores for a large number of documents held in an RDD, and it crashes every time I try to convert that RDD to a DataFrame. The initial error I get is:
org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.StackOverflowError
followed by many repetitions of:
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
and then:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1171)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:1069)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:1013)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2067)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:153)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
I did some research, and it seems the DAG (directed acyclic graph) attached to the DataFrame has grown too large, and that some kind of caching/checkpointing/persisting of the data should fix it. I tried that, but it still crashed every time. To keep the question uncluttered, the caching/checkpointing/persisting lines are omitted from the code below (a sketch of where they went follows it):
from pyspark.sql import SQLContext
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('app').getOrCreate()
sqlContext = SQLContext(spark.sparkContext)

rdd = spark.sparkContext.parallelize([])
data = []
count = 0
for sentence in giant_list_of_sentences:
    words = sentence.split(' ')
    data.append((words, count))  # count is the index of the document
    count += 1
    if len(data) > 5000:
        rdd = rdd.union(spark.sparkContext.parallelize(data))
        data = []
if len(data) > 0:
    rdd = rdd.union(spark.sparkContext.parallelize(data))
df_txts = sqlContext.createDataFrame(rdd, ["list_of_words", 'index'])
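For reference, the caching/checkpointing variant I tried placed the calls inside the batching loop, roughly like this (only a sketch; the checkpoint directory is a placeholder path, and it crashed the same way):

# Sketch of the cache/checkpoint/persist attempt; placement is approximate
# and '/tmp/spark-checkpoints' is only a placeholder path.
spark.sparkContext.setCheckpointDir('/tmp/spark-checkpoints')

rdd = spark.sparkContext.parallelize([])
data = []
count = 0
for sentence in giant_list_of_sentences:
    data.append((sentence.split(' '), count))  # count is the index of the document
    count += 1
    if len(data) > 5000:
        rdd = rdd.union(spark.sparkContext.parallelize(data)).persist()
        rdd.checkpoint()  # ask Spark to cut the lineage at the next action
        rdd.count()       # run an action so the checkpoint is actually written
        data = []
# ...followed by the same tail union and createDataFrame call as above.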
Either way, the run always gets to the final createDataFrame line and then fails, unless it's given only a small fraction of the data.

The solution turned out to be fairly simple: converting one huge RDD into one huge DataFrame chokes, but converting several smaller RDDs into several smaller DataFrames and then unioning those DataFrames works fine:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('app').getOrCreate()

rdds = [spark.sparkContext.parallelize([]) for _ in range(6)]
data = []
count = 0
turn = 0
for sentence in giant_list_of_sentences:
    words = sentence.split(' ')
    data.append((words, count))  # count is the index of the document
    count += 1
    if len(data) > 5000:
        rdds[turn] = rdds[turn].union(spark.sparkContext.parallelize(data))
        data = []
        turn = (turn + 1) % len(rdds)  # rotate so each batch lands in a different small RDD
if len(data) > 0:
    rdds[turn] = rdds[turn].union(spark.sparkContext.parallelize(data))

df_txts = rdds[0].toDF(['list_of_words', 'index'])
for i in range(1, len(rdds)):
    df_txts = df_txts.union(rdds[i].toDF(['list_of_words', 'index']))
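Presumably this works because each of the smaller RDDs carries a much shallower union lineage, so serializing the tasks no longer overflows the stack. With df_txts built, the tf-idf scores the question is ultimately after can be computed with Spark ML's feature transformers; a minimal sketch, using the column names from the code above and an arbitrarily chosen numFeatures:

from pyspark.ml.feature import HashingTF, IDF

# Hash each document's token list into a sparse term-frequency vector.
hashing_tf = HashingTF(inputCol='list_of_words', outputCol='raw_features', numFeatures=1 << 18)
featurized = hashing_tf.transform(df_txts)

# Fit IDF over the corpus and rescale the term frequencies into tf-idf scores.
idf_model = IDF(inputCol='raw_features', outputCol='tfidf').fit(featurized)
tfidf = idf_model.transform(featurized)
tfidf.select('index', 'tfidf').show(5)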