Scala Spark pivot
I have a df like this:
+---+-----+-----+----+
| M|M_Max|Sales|Rank|
+---+-----+-----+----+
| M1| 100| 200| 1|
| M1| 100| 175| 2|
| M1| 101| 150| 3|
| M1| 100| 125| 4|
| M1| 100| 90| 5|
| M1| 100| 85| 6|
| M2| 200| 1001| 1|
| M2| 200| 500| 2|
| M2| 201| 456| 3|
| M2| 200| 345| 4|
| M2| 200| 231| 5|
| M2| 200| 123| 6|
+---+-----+-----+----+
df.groupBy("M").pivot("Rank").agg(first("Sales")).show
+---+----+---+---+---+---+---+
| M| 1| 2| 3| 4| 5| 6|
+---+----+---+---+---+---+---+
| M1| 200|175|150|125| 90| 85|
| M2|1001|500|456|345|231|123|
+---+----+---+---+---+---+---+
I am doing a pivot operation on top of this df, like this:
df.groupBy("M").pivot("Rank").agg(first("Sales")).show
+---+----+---+---+---+---+---+
| M| 1| 2| 3| 4| 5| 6|
+---+----+---+---+---+---+---+
| M1| 200|175|150|125| 90| 85|
| M2|1001|500|456|345|231|123|
+---+----+---+---+---+---+---+
But my expected output is as shown below: I also need a column Max(M_Max) in the output, where M_Max is the maximum of the M_Max column. Is this possible with the pivot function, without using a DataFrame join?
+---+----+---+---+---+---+---+-----+
| M| 1| 2| 3| 4| 5| 6|M_Max|
+---+----+---+---+---+---+---+-----+
| M1| 200|175|150|125| 90| 85| 101|
| M2|1001|500|456|345|231|123| 201|
+---+----+---+---+---+---+---+-----+
The trick is to apply a window function. The solution is as follows:
scala> val df = Seq(
     | ("M1",100,200,1),
     | ("M1",100,175,2),
     | ("M1",101,150,3),
     | ("M1",100,125,4),
     | ("M1",100,90,5),
     | ("M1",100,85,6),
     | ("M2",200,1001,1),
     | ("M2",200,500,2),
     | ("M2",200,456,3),
     | ("M2",200,345,4),
     | ("M2",200,231,5),
     | ("M2",201,123,6)
     | ).toDF("M","M_Max","Sales","Rank")
df: org.apache.spark.sql.DataFrame = [M: string, M_Max: int ... 2 more fields]
scala> import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.expressions.Window
scala> val w = Window.partitionBy("M")
w: org.apache.spark.sql.expressions.WindowSpec = org.apache.spark.sql.expressions.WindowSpec@49b4e11c
scala> df.withColumn("new", max("M_Max") over (w)).groupBy("M", "new").pivot("Rank").agg(first("Sales")).withColumnRenamed("new", "M_Max").show
+---+-----+----+---+---+---+---+---+
| M|M_Max| 1| 2| 3| 4| 5| 6|
+---+-----+----+---+---+---+---+---+
| M1| 101| 200|175|150|125| 90| 85|
| M2| 201|1001|500|456|345|231|123|
+---+-----+----+---+---+---+---+---+
scala> df.show
+---+-----+-----+----+
| M|M_Max|Sales|Rank|
+---+-----+-----+----+
| M1| 100| 200| 1|
| M1| 100| 175| 2|
| M1| 101| 150| 3|
| M1| 100| 125| 4|
| M1| 100| 90| 5|
| M1| 100| 85| 6|
| M2| 200| 1001| 1|
| M2| 200| 500| 2|
| M2| 200| 456| 3|
| M2| 200| 345| 4|
| M2| 200| 231| 5|
| M2| 201| 123| 6|
+---+-----+-----+----+
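Outside the REPL, the same window-based idea can be written as a compact snippet like the one below. This is only a sketch assuming df is the DataFrame built above; the helper column name grp_max is purely illustrative:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{first, max}

// max("M_Max") over the partition is the same value for every row of a group,
// so it can safely be added as a second grouping key before the pivot
val w = Window.partitionBy("M")

df.withColumn("grp_max", max("M_Max").over(w))   // grp_max: illustrative helper column name
  .groupBy("M", "grp_max")
  .pivot("Rank")
  .agg(first("Sales"))
  .withColumnRenamed("grp_max", "M_Max")
  .show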
Let me know if this helps.

Basically, I think there are a few possible approaches: you can compute the max of M_max and use a join (which is what you want to avoid), or you can include max in the pivot and then aggregate the resulting columns with array_max, as shown below.

import org.apache.spark.sql.functions._
import spark.implicits._   // for the $"..." column syntax (pre-imported in spark-shell)

val df = Seq(
    ("M1",100,200,1), ("M1",100,175,2), ("M1",101,150,3),
    ("M1",100,125,4), ("M1",100,90,5), ("M1",100,85,6),
    ("M2",200,1001,1), ("M2",200,500,2), ("M2",200,456,3),
    ("M2",200,345,4), ("M2",200,231,5), ("M2",201,123,6)
).toDF("M","M_Max","Sales","Rank")
// we include max in the pivot so that we get one max column per Rank value
val df_pivot = df
    .groupBy("M").pivot("Rank")
    .agg(first("Sales") as "first", max("M_Max") as "max")

val max_cols = df_pivot.columns.filter(_ endsWith "max").map(col)

// then we aggregate these max columns into one
val max_col = array_max(array(max_cols: _*)) as "M_Max"

// let's rename the "first" columns to match the expected output
val first_cols = df_pivot.columns.filter(_ endsWith "first")
    .map(name => col(name) as name.split("_")(0))

// finally, we wrap everything together
df_pivot
    .select($"M" +: first_cols :+ max_col: _*)
    .show(false)
This produces:
+---+----+---+---+---+---+---+-----+
|M  |1   |2  |3  |4  |5  |6  |M_Max|
+---+----+---+---+---+---+---+-----+
|M1 |200 |175|150|125|90 |85 |101  |
|M2 |1001|500|456|345|231|123|201  |
+---+----+---+---+---+---+---+-----+
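Note that array_max is only available from Spark 2.4 onward. For completeness, the join-based variant mentioned above (the one the question wants to avoid) might look roughly like this, again assuming the same df:

import org.apache.spark.sql.functions.{first, max}

// pivot first, then join the per-group maximum of M_Max back on
val pivoted = df.groupBy("M").pivot("Rank").agg(first("Sales"))
val maxes = df.groupBy("M").agg(max("M_Max").as("M_Max"))

pivoted.join(maxes, Seq("M")).show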
This works perfectly, as always :) ... Thank you.