Pyspark: create a new column with the set of values within a groupby


I have a pyspark dataframe that looks like this:

df = pd.DataFrame({"Date": ["2020-05-10", "2020-05-10", "2020-05-10", "2020-05-11", "2020-05-11", "2020-05-12", ], "Mode": ['A', 'B', 'A', 'C', 'C', 'B']})

df = spark.createDataFrame(df)

+----------+----+
|      Date|Mode|
+----------+----+
|2020-05-10|   A|
|2020-05-10|   B|
|2020-05-10|   A|
|2020-05-11|   C|
|2020-05-11|   C|
|2020-05-12|   B|
+----------+----+
I want to group by Date and create a new column that contains the set of values in the Mode column, like this:

df = pd.DataFrame({"Date": ["2020-05-10", "2020-05-10", "2020-05-10", "2020-05-11", "2020-05-11", "2020-05-12", ], "Mode": ['A', 'B', 'A', 'C', 'C', 'B'], "set(Mode)": [['A', 'B'], ['A', 'B'], ['A', 'B'], ['C'], ['C'], ['B']]})

df = spark.createDataFrame(df)

+----------+----+---------+
|      Date|Mode|set(Mode)|
+----------+----+---------+
|2020-05-10|   A|   [A, B]|
|2020-05-10|   B|   [A, B]|
|2020-05-10|   A|   [A, B]|
|2020-05-11|   C|      [C]|
|2020-05-11|   C|      [C]|
|2020-05-12|   B|      [B]|
+----------+----+---------+

You can try collect_set over a window:

import pyspark.sql.functions as F
from pyspark.sql.window import Window

df.withColumn("Set", F.collect_set('Mode')
                      .over(Window.partitionBy("Date"))).orderBy("Date").show()

If the original row order matters:

(df.withColumn("idx",F.monotonically_increasing_id())
   .withColumn("Set",F.collect_set('Mode').over(Window.partitionBy("Date")))
   .orderBy("idx").drop("idx")).show()

+----------+----+------+
|      Date|Mode|   Set|
+----------+----+------+
|2020-05-10|   A|[B, A]|
|2020-05-10|   B|[B, A]|
|2020-05-10|   A|[B, A]|
|2020-05-11|   C|   [C]|
|2020-05-11|   C|   [C]|
|2020-05-12|   B|   [B]|
+----------+----+------+
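Note that collect_set does not guarantee the order of elements inside the collected array (the output above shows [B, A] while the question asks for [A, B]). If a deterministic ordering inside the set matters, one option (a minimal sketch, assuming an alphabetical ordering is acceptable) is to wrap the result in sort_array:

import pyspark.sql.functions as F
from pyspark.sql.window import Window

# Assumption: alphabetical order is acceptable; sort_array sorts the collected
# set ascending, so each Date gets e.g. [A, B] rather than an arbitrary order.
(df.withColumn("Set",
               F.sort_array(F.collect_set('Mode').over(Window.partitionBy("Date"))))
   .orderBy("Date")
   .show())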

You can try the code below:

# Import Libraries
import pandas as pd
from pyspark.sql.functions import collect_set

# Create DataFrame
df = pd.DataFrame({"Date": ["2020-05-10", "2020-05-10", "2020-05-10", "2020-05-11", "2020-05-11", "2020-05-12"], "Mode": ['A', 'B', 'A', 'C', 'C', 'B']})
df = spark.createDataFrame(df)

# Group by Date and collect the values as a set using the collect_set function.
df1 = df.groupBy("Date").agg(collect_set("Mode"))

# Join the DataFrames to get the desired result.
df2 = df.join(df1, "Date")

# Display DataFrame
df2.show()
Output

I hope this helps.
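By default the aggregated column is named collect_set(Mode). If you want it to be called set(Mode) as in the desired output, a small variation (a sketch using alias, not part of the original answer) would be:

from pyspark.sql.functions import collect_set

# Assumption: the aggregated column should be renamed "set(Mode)" to match the
# desired output; alias() renames it before joining back onto the original rows.
df1 = df.groupBy("Date").agg(collect_set("Mode").alias("set(Mode)"))
df2 = df.join(df1, "Date")
df2.show()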
