Scala: compute the maximum number of observations per group


I am using Spark 1.6.2.

I need to find the maximum count per group.

val myData = Seq(("aa1", "GROUP_A", "10"),("aa1","GROUP_A", "12"),("aa2","GROUP_A", "12"),("aa3", "GROUP_B", "14"),("aa3","GROUP_B", "11"),("aa3","GROUP_B","12" ),("aa2", "GROUP_B", "12"))

val df = sc.parallelize(myData).toDF("id","type","activity")
Let's first count the number of observations per group:

df.groupBy("type","id").count.show

+-------+---+-----+
|   type| id|count|
+-------+---+-----+
|GROUP_A|aa1|    2|
|GROUP_A|aa2|    1|
|GROUP_B|aa2|    1|
|GROUP_B|aa3|    3|
+-------+---+-----+
Here is the expected result:

+-------+---+-----+
|   type| id|count|
+-------+---+-----+
|GROUP_A|aa1|    2|
|GROUP_B|aa3|    3|
+-------+---+-----+
I tried this, but it does not work:

df.groupBy("type","id").count.filter("count = 'max'").show
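Note that count = 'max' compares the count column against the literal string 'max', so it never matches anything. A minimal sketch of a working variant of this filter idea (the column name maxCount is illustrative, not from the thread):

import org.apache.spark.sql.functions._

// compute the per-group maxima first, join them back, then filter
val counts = df.groupBy("type", "id").count()
val maxes = counts.groupBy("type").agg(max("count").alias("maxCount"))

counts.join(maxes, "type")
  .filter($"count" === $"maxCount")
  .select("type", "id", "count")
  .show()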

You can use the max function after grouping.

val myData = Seq(("aa1", "GROUP_A", "10"),("aa1","GROUP_A", "12"),("aa2","GROUP_A", "12"),("aa3", "GROUP_B", "14"),("aa3","GROUP_B", "11"),("aa3","GROUP_B","12" ),("aa2", "GROUP_B", "12"))

val df = sc.parallelize(myData).toDF("id","type","activity")
import org.apache.spark.sql.functions._

// after groupBy, count and alias the count column, then find the maximum of cnt per type

val newDF = df.groupBy("type", "id").agg(count("*").alias("cnt"))

val df1 = newDF.groupBy("type").agg(max("cnt").alias("maxCnt"))
df1.show
Now you can join the two DataFrames to get the output:

df1.join(newDF.as("newDF"), df1("type") === $"newDF.type" && df1("maxCnt") === $"newDF.cnt")
  .select($"newDF.*").show
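For comparison, the same join can be sketched in Spark SQL, assuming a Spark 1.6 spark-shell session where sqlContext is in scope (registerTempTable is the 1.x API):

// sketch only: equivalent logic expressed as a SQL join against a subquery
newDF.registerTempTable("counts")
sqlContext.sql(
  """SELECT c.type, c.id, c.cnt
    |FROM counts c
    |JOIN (SELECT type, MAX(cnt) AS maxCnt FROM counts GROUP BY type) m
    |  ON c.type = m.type AND c.cnt = m.maxCnt
  """.stripMargin).show()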

To get the "row with the maximum value of column X" (and not just that maximum value), you can use a small trick: "group" the relevant columns into a struct, with the ordering column as its first field, and then compute the max of that struct. Since the ordering of structs is "dominated" by the ordering of their first column, we get the desired result:

df.groupBy("id","type").count()                // get count per id and type
  .groupBy("type")                             // now group by type only
  .agg(max(struct("count", "id")) as "struct") // get maximum of (count, id) structs - since count is first, and id is unique - count will decide the ordering
  .select($"type", $"struct.id" as "id", $"struct.count" as "count") // "unwrap" structs
  .show()

// +-------+---+-----+
// |   type| id|count|
// +-------+---+-----+
// |GROUP_A|aa1|    2|
// |GROUP_B|aa3|    3|
// +-------+---+-----+
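One caveat about the struct trick, as a hedged aside: if two ids in the same group tie on count, the struct comparison falls back to the second field, so the lexicographically largest id wins. A minimal sketch with made-up tie data:

// hypothetical tie data: both ids appear twice in GROUP_A
val tieData = Seq(("aa1", "GROUP_A", "10"), ("aa1", "GROUP_A", "11"),
                  ("aa2", "GROUP_A", "12"), ("aa2", "GROUP_A", "13"))
val tieDF = sc.parallelize(tieData).toDF("id", "type", "activity")

tieDF.groupBy("id", "type").count()
  .groupBy("type")
  .agg(max(struct("count", "id")) as "struct")
  .select($"type", $"struct.id" as "id", $"struct.count" as "count")
  .show()
// both ids have count 2; (2, "aa2") > (2, "aa1"), so aa2 is returned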

You can use a window function to find the max, and drop the duplicates, by combining it with @Tzach's answer above:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val windowSpec = Window.partitionBy(col("type"))
df.groupBy("type", "id").count()
  .withColumn("count", max(struct("count", "id")).over(windowSpec))
  .dropDuplicates("type")
  .select($"type", $"count.id" as "id", $"count.count" as "count").show

Thanks.
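A related sketch (my own variant, not the answer above): row_number over the same window avoids the struct entirely. Note that on Spark 1.6, window functions require a HiveContext:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

// rank rows within each type by descending count and keep the top row
val byCount = Window.partitionBy("type").orderBy(desc("count"))

df.groupBy("type", "id").count()
  .withColumn("rn", row_number().over(byCount))
  .filter($"rn" === 1)
  .drop("rn")
  .show()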


Your "expected result" doesn't seem to match what you describe: if you want "the maximum count per group", group A should get 12 and group B should get 14, shouldn't they? Which id do you expect to see, the one from the record matching that maximum count? Please clarify. @TzachZohar: No, I am not searching for the maximum value. I want to first count the observations per group, and then select the maximum count per group. Please see my updated thread; I explain it step by step. To avoid any confusion, I renamed the initial columns with .toDF("id", "type", "activity").
Can you check my updated thread? I gave the example and the expected result there. @Digoraius, you can use window functions as above.

I have updated the answer. I got it wrong earlier because your DataFrame has a count field. Thanks. The id column did not appear in newDF; you can join the two DataFrames on matching counts and get the output.