Pyspark: Save dataframe to a single csv per date using window functions?


I have a dataframe like this:

df = pd.DataFrame({"Date": ["2020-05-10", "2020-05-10", "2020-05-10", "2020-05-11", "2020-05-11", "2020-05-11", "2020-05-11", "2020-05-11", "2020-05-11"],
                   "Slot_Length": [30, 30, 30, 30, 30, 30, 30, 30, 30],
                   "Total_Space": [60, 60, 60, 120, 120, 120, 120, 120, 120],
                   "Amount_Over": [-30, -30, -30, -60, -60, -60, -60, -60, -60],
                   "Rank": [1, 1, 2, 1, 1, 1, 1, 2, 2]})

df = spark.createDataFrame(df)

+----------+-----------+-----------+-----------+----+
|      Date|Slot_Length|Total_Space|Amount_Over|Rank|
+----------+-----------+-----------+-----------+----+
|2020-05-10|         30|         60|        -30|   1|
|2020-05-10|         30|         60|        -30|   1|
|2020-05-10|         30|         60|        -30|   2|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   2|
|2020-05-11|         30|        120|        -60|   2|
+----------+-----------+-----------+-----------+----+
df.coalesce(1).write.format("com.databricks.spark.csv"
                                       ).mode('overwrite'
                                             ).option("header", "true"
                                               ).save("s3://mycsv_date.csv")
I know that I can save the dataframe to a single csv file like this:

df = pd.DataFrame({"Date": ["2020-05-10", "2020-05-10", "2020-05-10", "2020-05-11", "2020-05-11", "2020-05-11", "2020-05-11", "2020-05-11", "2020-05-11"],
                   "Slot_Length": [30, 30, 30, 30, 30, 30, 30, 30, 30],
                   "Total_Space": [60, 60, 60, 120, 120, 120, 120, 120, 120],
                   "Amount_Over": [-30, -30, -30, -60, -60, -60, -60, -60, -60],
                   "Rank": [1, 1, 2, 1, 1, 1, 1, 2, 2]})

df = spark.createDataFrame(df)

+----------+-----------+-----------+-----------+----+
|      Date|Slot_Length|Total_Space|Amount_Over|Rank|
+----------+-----------+-----------+-----------+----+
|2020-05-10|         30|         60|        -30|   1|
|2020-05-10|         30|         60|        -30|   1|
|2020-05-10|         30|         60|        -30|   2|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   1|
|2020-05-11|         30|        120|        -60|   2|
|2020-05-11|         30|        120|        -60|   2|
+----------+-----------+-----------+-----------+----+
df.coalesce(1).write.format("com.databricks.spark.csv"
                                       ).mode('overwrite'
                                             ).option("header", "true"
                                               ).save("s3://mycsv_date.csv")
I would like to break up my dataframe by date and save each filtered dataframe to its own csv, so that I end up with files like:

mycsv_2020_05_10.csv
mycsv_2020_05_11.csv

What is the best way to do that?

Use:

(df.repartition('Date')
   .write
   .partitionBy('Date')
   .format("com.databricks.spark.csv")
   .mode('overwrite')
   .option("header", "true")
   .save("s3://bucket/path"))

Now, thanks to the .partitionBy('Date') clause, you will have a folder for each date and a single file inside each partition.
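
If you specifically need files named mycsv_2020_05_10.csv and mycsv_2020_05_11.csv rather than Spark's Date=... folders, a minimal alternative sketch is shown below (it assumes the number of distinct dates is small enough to collect to the driver, and the bucket/path is a placeholder):

from pyspark.sql import functions as F

# Collect the distinct dates to the driver (assumes only a handful of dates)
dates = [row["Date"] for row in df.select("Date").distinct().collect()]

for d in dates:
    # Write one dataframe per date; Spark still creates a directory with a
    # single part-*.csv file inside, so an extra rename/copy step would be
    # needed to end up with a bare file named mycsv_2020_05_10.csv
    out_path = "s3://bucket/path/mycsv_{}".format(d.replace("-", "_"))
    (df.filter(F.col("Date") == d)
       .coalesce(1)
       .write
       .mode("overwrite")
       .option("header", "true")
       .csv(out_path))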