Efficiently computing the top-k elements of a PySpark GroupedData (not Scala)

I have a dataframe of the following form:

+---+---+----+
|  A|  B|dist|
+---+---+----+
| a1| b1| 1.0|
| a1| b2| 2.0|
| a2| b1|10.0|
| a2| b2|10.0|
| a2| b3| 2.0|
| a3| b1|10.0|
+---+---+----+
For a fixed max_rank = 2, I want to obtain the following:

+---+---+----+----+
|  A|  B|dist|rank|
+---+---+----+----+
| a3| b1|10.0|   1|
| a2| b3| 2.0|   1|
| a2| b1|10.0|   2|
| a2| b2|10.0|   2|
| a1| b1| 1.0|   1|
| a1| b2| 2.0|   2|
+---+---+----+----+
The classic way of achieving this is the following:

from pyspark.sql.types import StructType, StructField, StringType, FloatType
from pyspark.sql.window import Window
from pyspark.sql.functions import rank

df = sqlContext.createDataFrame(
    [("a1", "b1", 1.), ("a1", "b2", 2.), ("a2", "b1", 10.),
     ("a2", "b2", 10.), ("a2", "b3", 2.), ("a3", "b1", 10.)],
    schema=StructType([StructField("A", StringType(), True),
                       StructField("B", StringType(), True),
                       StructField("dist", FloatType(), True)]))

win = Window().partitionBy(df['A']).orderBy(df['dist'])
out = df.withColumn('rank', rank().over(win))
out = out.filter('rank <= 2')
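Note that rank() assigns tied values the same rank, which is why both (a2, b1) and (a2, b2) appear with rank 2 in the expected output above; use row_number() instead if exactly k rows per group are wanted. As a minimal sketch of the same window-function pattern with max_rank as a parameter (the helper name top_k_per_group is mine, not part of any API):

from pyspark.sql.window import Window
from pyspark.sql.functions import rank, col

def top_k_per_group(df, group_col, order_col, max_rank):
    # Rank rows within each group by the ordering column; because rank()
    # gives ties equal ranks, more than max_rank rows may survive per group.
    win = Window.partitionBy(group_col).orderBy(order_col)
    return (df.withColumn('rank', rank().over(win))
              .filter(col('rank') <= max_rank))

out = top_k_per_group(df, 'A', 'dist', max_rank=2)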