Converting a PySpark DataFrame to a pandas DataFrame

Tags: pandas, pyspark, apache-spark-2.3

I have a PySpark DataFrame with dimensions (28002528, 21) and tried to convert it to a pandas DataFrame with the following line of code:

pd_df = spark_df.toPandas()
I got this error:

First part

Py4JJavaError: An error occurred while calling o170.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 39.0 failed 1 times, most recent failure: Lost task 3.0 in stage 39.0 (TID 89, localhost, executor driver): java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at net.jpountz.lz4.LZ4BlockOutputStream.flushBufferedData(LZ4BlockOutputStream.java:220)
    at net.jpountz.lz4.LZ4BlockOutputStream.write(LZ4BlockOutputStream.java:173)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at org.apache.spark.sql.catalyst.expressions.UnsafeRow.writeToStream(UnsafeRow.java:552)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:256)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:830)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)


Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599)
        ...
        ...

Caused by: java.lang.OutOfMemoryError: Java heap space
        ...
        ...    

Second part

Exception happened during processing of request from ('127.0.0.1', 56842)
ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server (127.0.0.1:56657)
Traceback (most recent call last):
        ...
        ...    
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

During handling of the above exception, another exception occurred:
        ...
        ...
I also tried taking a sample of the original PySpark DataFrame:

sample_pd_df = spark_df.sample(0.05).toPandas()
and got an error that looks like the first part of the previous one:
java.lang.OutOfMemoryError

This probably means you are trying to load all the data into a single node that does not have enough RAM to handle the whole dataframe. If you are using a cloud provider such as Databricks, try increasing the cluster's RAM.

What toPandas() does is collect the whole dataframe into a single node (as mentioned in @ulmefors's answer).

More specifically, it collects it to the driver. The particular option you should be fine-tuning is spark.driver.memory; increase it accordingly.
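
As a minimal sketch of how to do that, assuming you create the session yourself (on a managed cluster such as Databricks the driver memory is set in the cluster configuration instead; the "8g" value and the app name below are illustrative, not prescriptive):

from pyspark.sql import SparkSession

# Give the driver more heap. This must be configured before the driver JVM
# starts, so set it when building the session (or pass it to spark-submit).
spark = (
    SparkSession.builder
    .appName("to-pandas-example")         # illustrative name
    .config("spark.driver.memory", "8g")  # illustrative size; match your data
    .getOrCreate()
)

pd_df = spark_df.toPandas()  # spark_df as in the question

The equivalent on the command line is spark-submit --driver-memory 8g your_script.py.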

Otherwise, if you plan to do further transformations on this (rather large) pandas dataframe, you could consider doing them in PySpark first and then collecting the (smaller) result to the driver, which will hopefully fit in memory. For more details, see the Spark configuration documentation.
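
For instance, a minimal sketch along those lines (the column names "category" and "amount" are made up for illustration; substitute your own):

from pyspark.sql import functions as F

# Reduce the 28M-row dataframe in Spark first...
summary_df = (
    spark_df
    .groupBy("category")
    .agg(
        F.count("*").alias("n_rows"),
        F.avg("amount").alias("avg_amount"),
    )
)

# ...then collect only the small aggregated result to the driver.
pd_summary = summary_df.toPandas()

This way only the aggregate, not all 28 million rows, has to fit in the driver's memory.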