Apache Spark: a better-performing alternative to PySpark pivot


Here is my input dataset:

df = spark.createDataFrame([
    ("0","CattyCat","B2K","B"),
    ("0","CattyCat","B3L","I"),
    ("0","CattyCat","B3U","I"),
    ("0","CattyCat","D3J","C"),
    ("0","CattyCat","J1N","H"),
    ("0","CattyCat","K7A","I"),
    ("0","CattyCat","L1B","D"),
    ("0","CattyCat","U3F","B"),
    ("1","CattyCat","B2K","I"),
    ("1","CattyCat","B3L","I"),
    ("1","CattyCat","B3U","I"),
    ("1","CattyCat","D3J","C"),
    ("1","CattyCat","J1N","H"),
    ("1","CattyCat","K7A","I"),
    ("1","CattyCat","L1B","D"),
    ("1","CattyCat","U3F","B"),
    ("2","CattyCat","B2K","B"),
    ("2","CattyCat","B3L","B"),
    ("2","CattyCat","B3U","I"),
    ("2","CattyCat","D3J","C"),
    ("2","CattyCat","J1N","H"),
    ("2","CattyCat","K7A","I"),
    ("2","CattyCat","L1B","D"),
    ("2","CattyCat","U3F","B"),
], ["RowCount","CatName","Name","Value"])

df.show(30)

+--------+--------+----+-----+
|RowCount| CatName|Name|Value|
+--------+--------+----+-----+
|       0|CattyCat| B2K|    B|
|       0|CattyCat| B3L|    I|
|       0|CattyCat| B3U|    I|
|       0|CattyCat| D3J|    C|
|       0|CattyCat| J1N|    H|
|       0|CattyCat| K7A|    I|
|       0|CattyCat| L1B|    D|
|       0|CattyCat| U3F|    B|
|       1|CattyCat| B2K|    I|
|       1|CattyCat| B3L|    I|
|       1|CattyCat| B3U|    I|
|       1|CattyCat| D3J|    C|
|       1|CattyCat| J1N|    H|
|       1|CattyCat| K7A|    I|
|       1|CattyCat| L1B|    D|
|       1|CattyCat| U3F|    B|
|       2|CattyCat| B2K|    B|
|       2|CattyCat| B3L|    B|
|       2|CattyCat| B3U|    I|
|       2|CattyCat| D3J|    C|
|       2|CattyCat| J1N|    H|
|       2|CattyCat| K7A|    I|
|       2|CattyCat| L1B|    D|
|       2|CattyCat| U3F|    B|
+--------+--------+----+-----+
My goal is to pivot / cross-tabulate this data. I can achieve this with groupBy.pivot.agg, as shown below:

import pyspark.sql.functions as F
display(df.groupBy("RowCount","CatName").pivot("Name").agg(F.first("value")))

+----------+----------+-----+-----+-----+-----+-----+-----+-----+-----+
| RowCount | CatName  | B2K | B3L | B3U | D3J | J1N | K7A | L1B | U3F |
+----------+----------+-----+-----+-----+-----+-----+-----+-----+-----+
| 0        | CattyCat | B   | I   | I   | C   | H   | I   | D   | B   |
+----------+----------+-----+-----+-----+-----+-----+-----+-----+-----+
| 1        | CattyCat | I   | I   | I   | C   | H   | I   | D   | B   |
+----------+----------+-----+-----+-----+-----+-----+-----+-----+-----+
| 2        | CattyCat | B   | B   | I   | C   | H   | I   | D   | B   |
+----------+----------+-----+-----+-----+-----+-----+-----+-----+-----+
The problem I'm facing is that performance is very poor when the dataset is large (~100 million rows): a single task in the final stage, on a single executor, stays stuck for hours. I also found that pivot accepts a second argument, a list of column names, which may give better performance, but unfortunately I cannot know those column names in advance.
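
For reference, a minimal sketch of what that second argument looks like, assuming the distinct names can be collected once at runtime (the extra distinct pass and the sorted() call are my own assumptions, not part of the original code):

import pyspark.sql.functions as F

# One extra, column-pruned pass to discover the pivot values up front,
# so pivot() does not have to compute them internally.
names = sorted(r["Name"] for r in df.select("Name").distinct().collect())

pivoted = (
    df.groupBy("RowCount", "CatName")
      .pivot("Name", names)   # explicit value list as the second argument
      .agg(F.first("Value"))
)
pivoted.show()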


Is there a better way to do this crosstab?

You can check this post on how to do it without using pivot .. it is in Scala.

Thanks, will check it. Will have to try converting it to PySpark.
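
For readers who cannot follow the Scala link, one common pivot-free pattern (not necessarily the linked post's approach) is conditional aggregation: build one first(when(...)) expression per distinct Name and evaluate them all in a single groupBy pass. A minimal sketch, assuming the distinct names can be collected once and their count is modest:

import pyspark.sql.functions as F

# Collect the distinct pivot values once on the driver
# (assumes the number of distinct Name values is small enough to collect).
names = [r["Name"] for r in df.select("Name").distinct().collect()]

# Manual pivot via conditional aggregation: one expression per Name,
# computed in a single groupBy pass, without the pivot operator.
exprs = [
    F.first(F.when(F.col("Name") == n, F.col("Value")), ignorenulls=True).alias(n)
    for n in names
]

result = df.groupBy("RowCount", "CatName").agg(*exprs)
result.show()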