Apache Arrow OutOfMemoryException when PySpark reads a Hive table into memory

I searched for this kind of error but could not find anything on how to resolve it. This is what I get when executing the two scripts below:

org.apache.arrow.memory.OutOfMemoryException: Failure while allocating memory.
write.py

import pandas as pd
from pyspark.sql import SparkSession
from os.path import abspath

warehouse_location = abspath('spark-warehouse')

booksPD = pd.read_csv('books.csv')

spark = SparkSession.builder \
        .appName("MyApp") \
        .master("local[*]") \
        .config("spark.sql.execution.arrow.enabled", "true") \
        .config("spark.driver.maxResultSize", "16g") \
        .config("spark.python.worker.memory", "16g") \
        .config("spark.sql.warehouse.dir", warehouse_location) \
        .enableHiveSupport() \
        .getOrCreate()
spark.sparkContext.setLogLevel("WARN")

spark.createDataFrame(booksPD).write.saveAsTable("books")
spark.catalog.clearCache()
read.py

from pyspark.sql import SparkSession
from os.path import abspath

warehouse_location = abspath('spark-warehouse')

spark = SparkSession.builder \
        .appName("MyApp") \
        .master("local[*]") \
        .config("spark.sql.execution.arrow.enabled", "true") \
        .config("spark.driver.maxResultSize", "16g") \
        .config("spark.python.worker.memory", "16g") \
        .config("spark.sql.warehouse.dir", warehouse_location) \
        .enableHiveSupport() \
        .getOrCreate()
spark.sparkContext.setLogLevel("WARN")

books = spark.sql("SELECT * FROM books").toPandas()

Most likely the memory limits have to be increased. Appending the following configuration to increase the driver and executor memory solved my problem:

.config("spark.driver.memory", "16g") \
.config("spark.executor.memory", "16g") \

Since the program is configured to run in local mode (.master("local[*]")), the driver also takes on part of the load and needs enough memory.
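
For reference, here is a minimal sketch of how the builder from write.py / read.py would look with the two extra settings appended (the 16g values simply mirror the other settings above; size them to the memory actually available on your machine):

# Same builder as above, plus driver/executor memory.
# In local[*] mode the executors run inside the driver JVM,
# so spark.driver.memory is the setting that matters most here.
spark = SparkSession.builder \
        .appName("MyApp") \
        .master("local[*]") \
        .config("spark.sql.execution.arrow.enabled", "true") \
        .config("spark.driver.maxResultSize", "16g") \
        .config("spark.python.worker.memory", "16g") \
        .config("spark.driver.memory", "16g") \
        .config("spark.executor.memory", "16g") \
        .config("spark.sql.warehouse.dir", warehouse_location) \
        .enableHiveSupport() \
        .getOrCreate()

Note that spark.driver.memory set from the builder only takes effect if the driver JVM has not started yet (e.g. when the script is launched with plain python); when launching through spark-submit it usually has to be passed as --driver-memory on the command line instead.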
