Apache Spark / PySpark: overwrite mode when writing Parquet deletes other partitions


I am using PySpark to overwrite Parquet partitions in an S3 bucket. This is what my partition folders look like:

parent_folder
      -> year=2019 
            -->month=1
                ---->date=2019-01-01
                ---->date=2019-01-02
            -->month=2
                 ........
      -> year=2020
            -->month=1
                    ---->date=2020-01-01
                    ---->date=2020-01-02
            -->month=2
                    ........
Now, when I run a Spark script that needs to overwrite only specific partitions (say year=2020, month=1 and dates 2020-01-01 and 2020-01-02) with the line below:

df_final.write.partitionBy("year", "month", "date").mode("overwrite").format("parquet").save(output_dir_path)
the line above deletes all the other partitions and writes back only the data present in the final dataframe, df_final. I also set the overwrite mode to dynamic with the command below, but it does not seem to work:

conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")

My question is: is there a way to overwrite only specific partitions (more than one)? Any help would be greatly appreciated. Thanks in advance.
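
For reference, this is how the dynamic overwrite is typically applied on the session itself (rather than on a SparkConf before the session is created); a minimal sketch assuming Spark 2.3+ for the session setting and 2.4+ for the per-write option, with df_final and output_dir_path taken from the question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-overwrite").getOrCreate()

# Replace only the partitions present in df_final; leave all other
# partitions on S3 untouched.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

(df_final.write
    .partitionBy("year", "month", "date")
    .mode("overwrite")
    # Spark 2.4+ also accepts a per-write override of the session setting:
    .option("partitionOverwriteMode", "dynamic")
    .format("parquet")
    .save(output_dir_path))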

I think you are looking for a solution where a user can insert into and overwrite the existing partitions of a parquet table using Spark SQL, and I assume that by "parquet" you mean a partitioned Hive table stored as parquet.

You can create a Spark session with Hive support enabled; the steps are below. This is not exact production code, but pseudocode along the same lines:

from pyspark.sql import SparkSession, SQLContext

spark = SparkSession \
    .builder \
    .appName("Spark Hive Example") \
    .config("spark.sql.warehouse.dir", warehouseLocation) \
    .enableHiveSupport() \
    .getOrCreate()

# SQLContext is kept from the original sketch; spark.sql(...) works as well.
sqlCtx = SQLContext(spark.sparkContext)

try:
    # Read the parquet files and expose them as a temporary view for Spark SQL.
    df = spark.read.parquet("filename.parquet")
    df.createOrReplaceTempView("temp")

    # Overwrite only the partitions produced by the SELECT; other partitions
    # of the target Hive table are left in place.
    insertQuery = """INSERT OVERWRITE TABLE {}.{} PARTITION (part1, part2)
                     SELECT *
                     FROM temp a""".format(hiveSchema, hiveTable)
    sqlCtx.sql(insertQuery)
except Exception:
    logger.error("Error while loading data into table..")
    exit()
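
A dynamic partition insert like the one above usually also requires Hive's dynamic partitioning to be enabled; a hedged sketch of the settings commonly applied (skip them if your cluster already has them configured):

# Standard Hive settings for dynamic partition inserts; "nonstrict" allows
# all partition columns to be resolved from the SELECT rather than being
# fixed in the PARTITION clause.
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")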

I have written an example for the same. Below is Hive's INSERT OVERWRITE syntax; pick whichever form is relevant to you:

Hive extension (multiple inserts):
FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...) [IF NOT EXISTS]] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2]
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2] ...;
FROM from_statement
INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT INTO TABLE tablename2 [PARTITION ...] select_statement2]
[INSERT OVERWRITE TABLE tablename2 [PARTITION ... [IF NOT EXISTS]] select_statement2] ...;

Hive extension (dynamic partition inserts):
INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
INSERT INTO TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement;
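
As an illustration only (my_schema, my_table, col1 and col2 are hypothetical names; year, month and date mirror the layout from the question), overwriting just the two dates could look like this, with year and month as static partitions and date resolved dynamically from the SELECT:

# year and month are fixed (static) partitions; date is the trailing dynamic
# partition column, so it must be the last column of the SELECT.
spark.sql("""
    INSERT OVERWRITE TABLE my_schema.my_table
    PARTITION (year=2020, month=1, date)
    SELECT col1, col2, date
    FROM temp
    WHERE date IN ('2020-01-01', '2020-01-02')
""")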

spark.sparkContext.getConf().get('spark.sql.sources.partitionOverwriteMode')
What does that show? I ran into this same problem in a project; this was helpful.