Scala: How do I add groupBy().count() to the source DataFrame?


I have the following DataFrame:

+---------------+--------------+--------------+-----+
|        column0|       column1|       column2|label|
+---------------+--------------+--------------+-----+
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|
+---------------+--------------+--------------+-----+
I want to apply groupBy and count on it, and get the following result:

+--------------+--------------+-----+
|       column1|       column2|count|
+--------------+--------------+-----+
|10.0.0.2.54880| 10.0.0.3.5001|   19|
| 10.0.0.3.5001|10.0.0.2.54880|   10|
+--------------+--------------+-----+
I know I have to use this:

dataFrame_Train.groupBy("column1", "column2").count().show()
But the problem is that I need the "count" column added to my DataFrame permanently. In the case above, if I call dataFrame_Train.show() after the groupBy, I still see the original DataFrame without the "count" column, i.e. with this code:

dataFrame_Train.groupBy("column1", "column2").count().show()
dataFrame_Train.show()

Could you help me add the result of groupBy("column1", "column2").count() to my DataFrame? (I need to use the "count" column later to train the data.) Thanks.

We will use the same data you provided, in csv format.

Let's read the data:

scala> val df = spark.read.format("csv").load("data.txt").toDF("column0","column1","column2","label")
// df: org.apache.spark.sql.DataFrame = [column0: string, column1: string ... 2 more fields]
Now we can perform the grouping with aggregation:

scala> val df2 = df.groupBy("column1","column2").count
df2: org.apache.spark.sql.DataFrame = [column1: string, column2: string ... 1 more field]
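At this point df2 holds one row per (column1, column2) pair together with its count, but it has lost column0 and label, which is why we still need to join it back. With the sample data above it should look roughly like this:

scala> df2.show
+--------------+--------------+-----+
|       column1|       column2|count|
+--------------+--------------+-----+
|10.0.0.2.54880| 10.0.0.3.5001|   13|
| 10.0.0.3.5001|10.0.0.2.54880|    7|
+--------------+--------------+-----+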
All we need to do now is an equi-join on the same columns we grouped by:

scala> val df3 = df.join(df2, Seq("column1","column2"))
df3: org.apache.spark.sql.DataFrame = [column1: string, column2: string ... 3 more fields]

scala> df3.show
+--------------+--------------+---------------+-----+-----+                     
|       column1|       column2|        column0|label|count|
+--------------+--------------+---------------+-----+-----+
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604899|    2|   13|
| 10.0.0.3.5001|10.0.0.2.54880|05:49:56.604908|    2|    7|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604900|    2|   13|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604899|    2|   13|
| 10.0.0.3.5001|10.0.0.2.54880|05:49:56.604908|    2|    7|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604900|    2|   13|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604899|    2|   13|
| 10.0.0.3.5001|10.0.0.2.54880|05:49:56.604908|    2|    7|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604900|    2|   13|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604899|    2|   13|
| 10.0.0.3.5001|10.0.0.2.54880|05:49:56.604908|    2|    7|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604900|    2|   13|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604899|    2|   13|
| 10.0.0.3.5001|10.0.0.2.54880|05:49:56.604908|    2|    7|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604900|    2|   13|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604899|    2|   13|
| 10.0.0.3.5001|10.0.0.2.54880|05:49:56.604908|    2|    7|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604900|    2|   13|
|10.0.0.2.54880| 10.0.0.3.5001|05:49:56.604899|    2|   13|
| 10.0.0.3.5001|10.0.0.2.54880|05:49:56.604908|    2|    7|
+--------------+--------------+---------------+-----+-----+
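One possible refinement (a sketch, not part of the original answer): since df2 contains only one row per distinct (column1, column2) pair, it is usually small enough to broadcast, which avoids shuffling the large side of the join:

scala> import org.apache.spark.sql.functions.broadcast
scala> // hint Spark to broadcast the small, aggregated side of the join
scala> val df3b = df.join(broadcast(df2), Seq("column1", "column2"))

Whether this actually helps depends on the size of df2; Spark may already choose a broadcast join on its own when df2 is below spark.sql.autoBroadcastJoinThreshold.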

@eliasah's answer is fine, but it may not be the most efficient one, code-wise and performance-wise.

Window aggregate functions (a.k.a. window aggregates)

Whenever you see a need for groupBy followed by join, especially for such a simple use case as this one, consider window aggregate functions.

The main difference between groupBy and a window aggregate is that the former gives you at most as many rows as there are in the source dataset (one per group), while the latter (the window aggregate) gives you exactly as many rows as there are in the source dataset. That seems to be exactly what you are asking for, doesn't it?
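To make the difference concrete (a minimal sketch reusing the df read above): grouping collapses the data to one row per key, while the window aggregate keeps every source row.

scala> import org.apache.spark.sql.expressions.Window
scala> import org.apache.spark.sql.functions.count
scala> df.groupBy("column1", "column2").count().count()   // one row per key: 2 with the sample data
scala> df.withColumn("count", count($"label") over Window.partitionBy("column1", "column2")).count()   // same as df.count: 20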

With that said, let's see the code.

import org.apache.spark.sql.expressions.Window
val columns1and2 = Window.partitionBy("column1", "column2") // <-- matches groupBy

import org.apache.spark.sql.functions._
// ips is the source DataFrame (the same data as the df read above)
// using the count aggregate function over the entire partition frame
val counts = ips.withColumn("count", count($"label") over columns1and2)
scala> counts.show
+---------------+--------------+--------------+-----+-----+
|        column0|       column1|       column2|label|count|
+---------------+--------------+--------------+-----+-----+
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604900|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604899|10.0.0.2.54880| 10.0.0.3.5001|    2|   13|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|    7|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|    7|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|    7|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|    7|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|    7|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|    7|
|05:49:56.604908| 10.0.0.3.5001|10.0.0.2.54880|    2|    7|
+---------------+--------------+--------------+-----+-----+
Done! Clean and simple. That's my favorite: window aggregate functions!

Performance comparison

Now, the fun part. Is the difference between this and @eliasah's solution merely syntactic? I don't think so (but I'm still learning how to draw the right conclusions). Look at the execution plans and judge for yourself.
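If you want to reproduce the comparison yourself (a quick sketch, assuming the df3 and counts DataFrames built above), Spark can print each physical plan directly:

scala> df3.explain()     // groupBy + join variant
scala> counts.explain()  // window aggregate variant

The Exchange operators in the printed plans mark the shuffles discussed below.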

Below is the execution plan for the window aggregation.

And here is the execution plan for the groupBy and join (I had to take two screenshots since the plan was too big to fit on one screen).

Job-wise, the groupBy-and-join query easily beats the window aggregation, with 2 Spark jobs for the former versus 5 for the latter.


操作员方面,他们的数量和最重要的交换(即Spark SQL的洗牌)、窗口聚合可能已经击败了
groupBy
join

Jobs and operators are interesting, but which one is faster wall-clock-wise?
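One simple way to answer that yourself (a rough sketch, not a rigorous benchmark): trigger both queries with an action and time them in the shell, e.g. with spark.time (available since Spark 2.1):

scala> spark.time(df3.count())    // groupBy + join
scala> spark.time(counts.count()) // window aggregate

Results will vary with data size, partitioning, and caching, so repeat the measurements a few times before drawing conclusions.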