Scala: how to get the row with the maximum value in a specific column from a DataFrame?
I have a DataFrame like this:
df.show(5)

kv     |list1      |list2                |p
[k1,v2]|[1,2,5,9]  |[5,1,7,9,6,3,1,4,9]  |0.5
[k1,v3]|[1,2,5,8,9]|[5,1,7,9,6,3,1,4,15] |0.9
[k2,v2]|[77,2,5,9] |[0,1,8,9,7,3,1,4,100]|0.01
[k5,v5]|[1,0,5,9]  |[5,1,7,9,6,3,1,4,3]  |0.3
[k9,v2]|[1,2,5,9]  |[5,1,7,9,6,3,1,4,200]|2.5
df.count()
5200158
I want to get the row with the largest p. The following works for me, but I wonder whether there is a cleaner way:
import org.apache.spark.sql.functions.{col, max, struct}

val f = df.select(max(struct(
  col("p") +: df.columns.collect { case x if x != "p" => col(x) }: _*
))).first()
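This works because Spark compares structs field by field, so listing `p` first makes `max(struct(...))` return the struct of the row with the largest `p`. A plain-Scala sketch of the same lexicographic-max idea, using hypothetical sample rows (no Spark required):

```scala
// Tuples compare field by field, just like Spark structs, so putting
// p first makes `max` pick the row with the largest p.
// The (p, kv) pairs below are hypothetical sample data.
val rows = Seq(
  (0.5, "k1,v2"),
  (0.9, "k1,v3"),
  (0.01, "k2,v2"),
  (2.5, "k9,v2")
)
val best = rows.max // compares p first, then kv as a tiebreaker
println(best)       // (2.5,k9,v2)
```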
Just order by and take:
import org.apache.spark.sql.functions.desc

df.orderBy(desc("p")).take(1)
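For intuition: on a plain Scala collection, the single-pass equivalent of sort-then-take-one is `maxBy`, which avoids ordering the whole collection just to keep its head (the sample rows below are hypothetical):

```scala
// Single-pass maximum instead of sort-then-take
// (hypothetical (p, kv) sample rows).
val rows = Seq((0.5, "k1,v2"), (0.9, "k1,v3"), (2.5, "k9,v2"))
val top = rows.maxBy(_._1) // row with the largest p
println(top)               // (2.5,k9,v2)
```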
or
df.orderBy(desc("p")).limit(1).first

You can also use window functions; this is particularly useful if the logic for selecting rows becomes more complex than a global min/max:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.max

df
  .withColumn("max_p", max($"p").over(Window.partitionBy()))
  .where($"p" === $"max_p")
  .drop("max_p")
  .first()
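One property of the window approach worth noting: before the final `.first()`, the filter keeps every row tied for the maximum, not just one, unlike `orderBy(...).take(1)`. A plain-Scala sketch of the same compute-global-max-then-filter pattern, with hypothetical data containing a tie:

```scala
// Compute the global max of p, then keep every row that matches it
// (hypothetical (kv, p) sample data with a tie at p = 2.5).
val data = Seq(("k1,v3", 0.9), ("k9,v2", 2.5), ("k7,v1", 2.5))
val maxP = data.map(_._2).max
val winners = data.filter(_._2 == maxP) // keeps both tied rows
println(winners) // List((k9,v2,2.5), (k7,v1,2.5))
```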