Scala: using groupBy to select the most frequent value


Sample input:

    Artist  Skill
 1. Bono    Vocals
 2. Bono    Vocals
 3. Bono    Vocals
 4. Bono    Guitar
 5. Edge    Vocals
 6. Edge    Guitar
 7. Edge    Guitar
 8. Edge    Guitar
 9. Edge    Bass
10. Larry   Drum
11. Larry   Drum
12. Larry   Guitar
13. Clayton Bass
14. Clayton Bass
15. Clayton Guitar
Expected output:

   Artist  Most common skill
1. Bono    Vocals
2. Edge    Guitar
3. Larry   Drum
4. Clayton Bass

I have a DataFrame, and I want to write deterministic Scala code that produces a new DataFrame with one row per distinct "Artist", containing the most common "Skill" for that artist.

You can combine groupBy with a window function, as follows:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{count, row_number}

val window = Window.partitionBy("Artist").orderBy($"count".desc)

df.groupBy("Artist", "Skill")
  .agg(count("Skill").as("count"))           // count of each (Artist, Skill) pair
  .withColumn("rn", row_number over window)  // rank skills within each artist by count
  .where($"rn" === 1)                        // keep only the most frequent skill
  .drop("rn", "count")
  .show(false)
Output:

+-------+------+
|Artist |Skill |
+-------+------+
|Clayton|Bass  |
|Larry  |Drum  |
|Edge   |Guitar|
|Bono   |Vocals|
+-------+------+
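For reference, the same "most frequent value per key" logic can be sketched with plain Scala collections, no Spark session required; the data below simply mirrors the question's sample input:

```scala
// Most frequent Skill per Artist using plain Scala collections.
val rows = Seq(
  "Bono" -> "Vocals", "Bono" -> "Vocals", "Bono" -> "Vocals", "Bono" -> "Guitar",
  "Edge" -> "Vocals", "Edge" -> "Guitar", "Edge" -> "Guitar", "Edge" -> "Guitar", "Edge" -> "Bass",
  "Larry" -> "Drum", "Larry" -> "Drum", "Larry" -> "Guitar",
  "Clayton" -> "Bass", "Clayton" -> "Bass", "Clayton" -> "Guitar"
)

val mostCommon: Map[String, String] = rows
  .groupBy(_._1)                  // group rows by Artist
  .map { case (artist, pairs) =>
    // count each Skill and keep the one that occurs most often
    artist -> pairs.groupBy(_._2).maxBy(_._2.size)._1
  }
// contains Bono -> Vocals, Edge -> Guitar, Larry -> Drum, Clayton -> Bass
```

This can be handy for sanity-checking the expected result on small data before running the Spark version.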

What have you tried so far, and what didn't work for you?

I tried this line of code: df.groupBy("Artist").max().show(), but unfortunately it doesn't give the expected result. I'm still new to Scala and DataFrames.
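As a side note on why groupBy("Artist").max() falls short: with no arguments, Spark's max() aggregates only numeric columns, so a string column like Skill is ignored, and even agg(max("Skill")) would pick the lexicographically largest skill rather than the most frequent one. A tiny plain-Scala illustration using Edge's skills from the sample input:

```scala
// Edge's skills from the sample input
val edgeSkills = Seq("Vocals", "Guitar", "Guitar", "Guitar", "Bass")

// Lexicographic max: "Vocals" sorts after "Guitar" and "Bass"
val lexMax = edgeSkills.max                                          // "Vocals"

// Frequency-based pick: "Guitar" occurs most often
val mostFrequent = edgeSkills.groupBy(identity).maxBy(_._2.size)._1  // "Guitar"
```

This is why the accepted approach first counts (Artist, Skill) pairs and then ranks by that count, instead of taking a max over the Skill values themselves.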