Scala groupBy + custom aggregation on a Dataset whose key contains a case class/trait

I'm trying to refactor some code and move the general logic into a trait. Basically, I want to process datasets, group them by some key and aggregate them:

import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.{ Dataset, Encoder, Encoders, TypedColumn }

case class SomeKey(a: String, b: Boolean)

case class InputRow(
  key: SomeKey,
  v: Double
)

trait MyTrait {

  def processInputs: Dataset[InputRow]

  def groupAndAggregate(
    logs: Dataset[InputRow]
  ): Dataset[(SomeKey, Long)] = {
    import logs.sparkSession.implicits._

    logs
      .groupByKey(i => i.key)
      .agg(someAggFunc)

  }
  // Whatever agg function: here, it counts the number of v values that are >= 0.5
  def someAggFunc: TypedColumn[InputRow, Long] =
    new Aggregator[
      /*input type*/ InputRow,
      /* "buffer" type */ Long,
      /* output type */ Long
    ] with Serializable {

      def zero = 0L

      def reduce(b: Long, a: InputRow) = {
        if (a.v >= 0.5)
          b + 1
        else
          b
      }

      def merge(b1: Long, b2: Long) =
        b1 + b2

      // map buffer to output type
      def finish(b: Long) = b
      def bufferEncoder: Encoder[Long] = Encoders.scalaLong
      def outputEncoder: Encoder[Long] = Encoders.scalaLong
    }.toColumn
}
Everything works fine: I can instantiate a class that inherits from MyTrait and override the way the inputs are processed:

import spark.implicits._
case class MyTraitTest(testDf: DataFrame) extends MyTrait {
    override def processInputs: Dataset[InputRow] = {
      val ds = testDf
        .select(
          $"a",
          $"b",
          $"v",
        )
        .rdd
        .map(
          r =>
            InputRow(
              SomeKey(r.getAs[String]("a"), r.getAs[Boolean]("b")),
              r.getAs[Double]("v")
          )
        )
        .toDS
      ds
    }
}

val df: DataFrame = Seq(
 ("1", false, 0.40),
 ("1", false, 0.54),
 ("0", true, 0.85),
 ("1", true, 0.39)
).toDF("a", "b", "v")

val myTraitTest  = MyTraitTest(df)
val ds: Dataset[InputRow] = myTraitTest.processInputs
val res                   = myTraitTest.groupAndAggregate(ds)
res.show(false)

+----------+----------------------------------+
|key       |InputRow                          |
+----------+----------------------------------+
|[1, false]|1                                 |
|[0, true] |1                                 |
|[1, true] |0                                 |
+----------+----------------------------------+
Now the problem: I want SomeKey to derive from a more generic trait Key, because the key will not always have just two fields, the fields will not always have the same types, and so on. It will always be a simple tuple of basic primitive types, though.

So I tried the following:

trait Key extends Product
case class SomeKey(a: String, b: Boolean) extends Key
case class SomeOtherKey(x: Int, y: Boolean, z: String) extends Key

case class InputRow[T <: Key](
   key: T,
   v: Double
)

trait MyTrait[T <: Key] {

  def processInputs: Dataset[InputRow[T]]

  def groupAndAggregate(
    logs: Dataset[InputRow[T]]
  ): Dataset[(T, Long)] = {
    import logs.sparkSession.implicits._

    logs
      .groupByKey(i => i.key)
      .agg(someAggFunc)

  }

  def someAggFunc: TypedColumn[InputRow[T], Long] = {...}
and so on.

But now I get the error:
Unable to find encoder for type T. An implicit Encoder[T] is needed to store T instances in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
.groupByKey(i => i.key)


I really don't know how to solve this, and I have tried many things without success. Sorry for the long description, but hopefully you have all the elements you need to help me understand... Thanks!

Spark needs to be able to implicitly create the encoder for the Product type T, so you need to help it work around JVM type erasure and pass the TypeTag of T as an implicit parameter of the groupAndAggregate method.
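
Why the TypeTag is the missing piece: Dataset.groupByKey requires an implicit Encoder for the key type, and the Product encoder that spark.implicits._ brings into scope is itself derived from a TypeTag, which erasure removes for an abstract T. A minimal sketch of that dependency (keyEncoder is only an illustrative helper, not part of the answer's code):

import org.apache.spark.sql.{ Encoder, Encoders }
import scala.reflect.runtime.universe.TypeTag

// Dataset.groupByKey is declared roughly as
//   def groupByKey[K: Encoder](func: T => K): KeyValueGroupedDataset[K, T]
// and spark.implicits._ derives Encoder[K] for Product types from a TypeTag.
// With a TypeTag in scope, the encoder for a case-class key can be built:
def keyEncoder[K <: Product : TypeTag]: Encoder[K] = Encoders.product[K]

// keyEncoder[SomeKey] compiles because SomeKey is concrete; inside a generic
// trait the same derivation only works if an implicit TypeTag[T] is passed
// in, which is exactly what the groupAndAggregate signature below does.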

A working example:

import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.{ DataFrame, Dataset, Encoders, TypedColumn }
import scala.reflect.runtime.universe.TypeTag

trait Key extends Product
case class SomeKey(a: String, b: Boolean) extends Key
case class SomeOtherKey(x: Int, y: Boolean, z: String) extends Key

case class InputRow[T <: Key](key: T, v: Double)

trait MyTrait[T <: Key] {

  def processInputs: Dataset[InputRow[T]]

  def groupAndAggregate(
    logs: Dataset[InputRow[T]]
  )(implicit tTypeTag: TypeTag[T]): Dataset[(T, Long)] = {
    import logs.sparkSession.implicits._

    logs
      .groupByKey(i => i.key)
      .agg(someAggFunc)
  }

  def someAggFunc: TypedColumn[InputRow[T], Long] =
    new Aggregator[InputRow[T], Long, Long] with Serializable {

      def reduce(b: Long, a: InputRow[T]) = b + (a.v * 100).toLong

      def merge(b1: Long, b2: Long) = b1 + b2

      def zero = 0L
      def finish(b: Long) = b      
      def bufferEncoder = Encoders.scalaLong
      def outputEncoder = Encoders.scalaLong
    }.toColumn
}
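
The answer does not repeat how MyTraitTest is defined for the parameterized trait; presumably it now extends MyTrait[SomeKey] and keeps the question's processInputs, roughly like this sketch (the column names "a", "b", "v" and the DataFrame-based input are assumed to be the same as in the question):

import org.apache.spark.sql.DataFrame

case class MyTraitTest(testDf: DataFrame) extends MyTrait[SomeKey] {
  override def processInputs: Dataset[InputRow[SomeKey]] = {
    import testDf.sparkSession.implicits._

    testDf
      .select($"a", $"b", $"v")
      .rdd
      .map(r =>
        // build the typed key + value row from the raw columns
        InputRow(
          SomeKey(r.getAs[String]("a"), r.getAs[Boolean]("b")),
          r.getAs[Double]("v")
        )
      )
      .toDS
  }
}
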
And the test execution:

val df = Seq(
 ("1", false, 0.40),
 ("1", false, 0.54),
 ("0", true, 0.85),
 ("1", true, 0.39)
).toDF("a", "b", "v")

val myTraitTest  = MyTraitTest(df)
val ds = myTraitTest.processInputs
val res = myTraitTest.groupAndAggregate(ds)
res.show(false)

+----------+-----------------------------------------------+
|key       |$anon$1($line5460910223.$read$$iw$$iw$InputRow)|
+----------+-----------------------------------------------+
|[1, false]|94                                             |
|[1, true] |39                                             |
|[0, true] |85                                             |
+----------+-----------------------------------------------+
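(The values differ from the question's output because this version of someAggFunc adds (a.v * 100).toLong for every row in the group instead of counting values >= 0.5, e.g. 40 + 54 = 94 for the key [1, false].)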
