PySpark: is it possible to increase the pyarrow buffer?

I'm trying to pass a large (~30GB) dataframe to a pandas_udf in Spark, like this:

import pyspark.sql.functions as f

# gen_udf_schema() returns the StructType describing the rows the udf gives back
@f.pandas_udf(gen_udf_schema(), f.PandasUDFType.GROUPED_MAP)
def _my_udf(df):
    # df here is a pandas DataFrame holding every row of one group
    # ... do df work ...
    return df

df = df.groupBy('some_col').apply(_my_udf)
I've tried increasing my executor memory, driver memory, and driver maxResultSize, but I still get the pyarrow memory error detailed below on the cluster. Is there an equivalent of the driver maxResultSize, i.e. an executor maxResultSize, that I could use to avoid this error? There doesn't seem to be much information about this online.
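For context, this is roughly how I've been setting those properties (the values are illustrative placeholders, not a known fix). I also tried spark.sql.execution.arrow.maxRecordsPerBatch, although as far as I can tell a GROUPED_MAP udf still receives each whole group as a single Arrow batch:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName('pandas-udf-arrow')
    # illustrative sizes only -- none of these resolved the Arrow error
    .config('spark.executor.memory', '32g')
    # driver memory generally has to be set before the JVM starts
    # (spark-submit --driver-memory or spark-defaults.conf)
    .config('spark.driver.memory', '32g')
    .config('spark.driver.maxResultSize', '16g')
    # caps rows per Arrow batch for Arrow-based conversion, but it does
    # not appear to chunk the input to a grouped-map pandas_udf
    .config('spark.sql.execution.arrow.maxRecordsPerBatch', 10000)
    .getOrCreate()
)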

I can't split the dataframe up front, because it is really the union of 1 small dataframe and 1 large (~29GB) dataframe. Inside my udf I separate the two, do my work, and return only the small dataframe; a rough sketch follows.
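Schematically, the udf body looks like this (the 'source' flag column and the processing step are simplified placeholders, not my real code):

import pyspark.sql.functions as f

@f.pandas_udf(gen_udf_schema(), f.PandasUDFType.GROUPED_MAP)
def _my_udf(df):
    # 'source' is a hypothetical flag marking which side of the union a row
    # came from; the real column name and logic differ
    small = df[df['source'] == 'small']
    large = df[df['source'] == 'large']

    # ... use `large` to process/enrich `small` ...

    # only the small frame comes back out, so the output is not the issue --
    # the whole group still has to cross into Python through Arrow first
    return small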

py4j.protocol.Py4JJavaError: An error occurred while calling o324.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage 19.0 failed 4 times, most recent failure: Lost task 44.3 in stage 19.0 (TID 368, ip-172-31-13-57.us-west-2.compute.internal, executor 3): org.apache.arrow.vector.util.OversizedAllocationException: Unable to expand the buffer
    at org.apache.arrow.vector.BaseVariableWidthVector.reallocBufferHelper(BaseVariableWidthVector.java:547)
    at org.apache.arrow.vector.BaseVariableWidthVector.reallocValidityAndOffsetBuffers(BaseVariableWidthVector.java:529)
    at org.apache.arrow.vector.BaseVariableWidthVector.handleSafe(BaseVariableWidthVector.java:1221)
    at org.apache.arrow.vector.BaseVariableWidthVector.fillEmpties(BaseVariableWidthVector.java:881)
    at org.apache.arrow.vector.BaseVariableWidthVector.setSafe(BaseVariableWidthVector.java:1062)
    at org.apache.spark.sql.execution.arrow.StringWriter.setValue(ArrowWriter.scala:242)
    at org.apache.spark.sql.execution.arrow.ArrowFieldWriter.write(ArrowWriter.scala:121)
    at org.apache.spark.sql.execution.arrow.ArrowWriter.write(ArrowWriter.scala:86)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$2$$anonfun$writeIteratorToStream$1.apply$mcV$sp(ArrowPythonRunner.scala:85)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$2$$anonfun$writeIteratorToStream$1.apply(ArrowPythonRunner.scala:76)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$2$$anonfun$writeIteratorToStream$1.apply(ArrowPythonRunner.scala:76)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$2.writeIteratorToStream(ArrowPythonRunner.scala:96)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:345)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1945)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:194)

What does your schema look like? The current Java library for Apache Arrow has some limits on columns (they cannot hold more than 2GB). @MicahKornfield apologies for the late reply; I have two string columns containing "stringized" JSON across 9,900,000 rows (each about 490 bytes, which puts those columns at roughly 2.6GB). A rough per-group size check is sketched below.
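As a sanity check against that per-column limit, something like this should estimate how many bytes the JSON string columns of each group occupy before they are handed to Arrow (the column names 'json_col_1' and 'json_col_2' are placeholders for my real columns):

import pyspark.sql.functions as f

# length() counts characters, which approximates bytes for mostly-ASCII JSON;
# any group whose total approaches ~2GB is a likely culprit for the
# BaseVariableWidthVector failure in the stack trace above
sizes = (
    df.groupBy('some_col')
      .agg(
          (f.sum(f.length('json_col_1')) +
           f.sum(f.length('json_col_2'))).alias('approx_bytes')
      )
      .orderBy(f.desc('approx_bytes'))
)
sizes.show(10, truncate=False)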