Python: 'java.lang.OutOfMemoryError: Java heap space' when using toPandas() and databricks-connect


I am trying to convert a pyspark dataframe of size [2734984 rows x 11 columns] to a pandas dataframe by calling `toPandas()`. It works perfectly fine (11 seconds) in an Azure Databricks notebook, but when I run exactly the same code through Databricks Connect I hit a `java.lang.OutOfMemoryError: Java heap space` exception (the databricks-connect version and the Databricks Runtime version match, both are 7.1).

I have already increased the Spark driver memory (100g) and maxResultSize (15g). I suspect the error lies somewhere in databricks-connect, since I cannot reproduce it from a notebook.
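
For reference, a minimal sketch of the conversion described above. The table name is a placeholder, and the question does not say where the memory settings were applied, so showing them on the session builder is only an assumption:

```python
from pyspark.sql import SparkSession

# Settings mirroring the values mentioned above (how/where they were actually
# applied is not stated in the question; the builder is just one possible place).
spark = (
    SparkSession.builder
    .config("spark.driver.memory", "100g")
    .config("spark.driver.maxResultSize", "15g")
    .getOrCreate()
)

# Hypothetical source standing in for the ~2.7M-row, 11-column DataFrame.
sdf = spark.table("my_database.my_table")

# The call that works in the notebook but OOMs under databricks-connect:
# every row is collected (via Arrow) onto the machine running this script.
pdf = sdf.toPandas()
print(pdf.shape)
```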

Any clues?

The error is as follows:

```
Exception in thread "serve-Arrow" java.lang.OutOfMemoryError: Java heap space
    at com.ning.compress.lzf.ChunkDecoder.decode(ChunkDecoder.java:51)
    at com.ning.compress.lzf.LZFDecoder.decode(LZFDecoder.java:102)
    at com.databricks.service.SparkServiceRPCClient.executeRPC0(SparkServiceRPCClient.scala:84)
    at com.databricks.service.SparkServiceRemoteFuncRunner.withRpcRetries(SparkServiceRemoteFuncRunner.scala:234)
    at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPC(SparkServiceRemoteFuncRunner.scala:156)
    at com.databricks.service.SparkServiceRemoteFuncRunner.executeRPCHandleCancels(SparkServiceRemoteFuncRunner.scala:287)
    at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute0$1(SparkServiceRemoteFuncRunner.scala:118)
    at com.databricks.service.SparkServiceRemoteFuncRunner$$Lambda$934/2145652039.apply(Unknown Source)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
    at com.databricks.service.SparkServiceRemoteFuncRunner.withRetry(SparkServiceRemoteFuncRunner.scala:135)
    at com.databricks.service.SparkServiceRemoteFuncRunner.execute0(SparkServiceRemoteFuncRunner.scala:113)
    at com.databricks.service.SparkServiceRemoteFuncRunner.$anonfun$execute$1(SparkServiceRemoteFuncRunner.scala:86)
    at com.databricks.service.SparkServiceRemoteFuncRunner$$Lambda$1031/465320026.apply(Unknown Source)
    at com.databricks.spark.util.Log4jUsageLogger.recordOperation(UsageLogger.scala:210)
    at com.databricks.spark.util.UsageLogging.recordOperation(UsageLogger.scala:346)
    at com.databricks.spark.util.UsageLogging.recordOperation$(UsageLogger.scala:325)
    at com.databricks.service.SparkServiceRPCClientStub.recordOperation(SparkServiceRPCClientStub.scala:61)
    at com.databricks.service.SparkServiceRemoteFuncRunner.execute(SparkServiceRemoteFuncRunner.scala:78)
    at com.databricks.service.SparkServiceRemoteFuncRunner.execute$(SparkServiceRemoteFuncRunner.scala:67)
    at com.databricks.service.SparkServiceRPCClientStub.execute(SparkServiceRPCClientStub.scala:61)
    at com.databricks.service.SparkServiceRPCClientStub.executeRDD(SparkServiceRPCClientStub.scala:225)
    at com.databricks.service.SparkClient$.executeRDD(SparkClient.scala:279)
    at com.databricks.spark.util.SparkClientContext$.executeRDD(SparkClientContext.scala:161)
    at org.apache.spark.scheduler.DAGScheduler.submitJob(DAGScheduler.scala:864)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:928)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2331)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2426)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$6(Dataset.scala:3638)
    at org.apache.spark.sql.Dataset$$Lambda$3567/1086808304.apply$mcV$sp(Unknown Source)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1581)
    at org.apache.spark.sql.Dataset.$anonfun$collectAsArrowToPython$3(Dataset.scala:3642)
```

This is probably happening because Databricks Connect executes the `toPandas()` collection on the client machine, which can then run out of memory. You can increase the local driver memory by setting `spark.driver.memory` in the (local) configuration file `${spark_home}/conf/spark-defaults.conf`, where `${spark_home}` can be obtained with `databricks-connect get-spark-home`.
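
One possible way to script that change (the 10g value and the choice to append rather than edit the file by hand are purely illustrative):

```python
import os
import subprocess

# Ask databricks-connect where its bundled Spark distribution lives.
spark_home = subprocess.check_output(
    ["databricks-connect", "get-spark-home"], text=True
).strip()

conf_path = os.path.join(spark_home, "conf", "spark-defaults.conf")

# Append a larger local driver heap; the conf file may not exist yet.
os.makedirs(os.path.dirname(conf_path), exist_ok=True)
with open(conf_path, "a") as f:
    f.write("\nspark.driver.memory 10g\n")

print(f"updated {conf_path}")
```
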
Works great! I suspected something like this but had absolutely no idea where to start. For anyone with the same problem: in my case `\conf\spark-defaults.conf` did not exist, so I just created it and inserted this single line: `spark.driver.memory 10g`. While `toPandas()` is running I can clearly see the memory load increasing on my local machine. Thanks again!
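
If it helps anyone, a quick way to check that the local databricks-connect driver actually picked up the new value (assuming the conf file above is in place and any previously running session has been restarted):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Should print "10g" once spark-defaults.conf is being read by the local
# driver JVM started by databricks-connect.
print(spark.sparkContext.getConf().get("spark.driver.memory", "not set"))
```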