How do I define a custom aggregate function in Scala to sum a column of Vectors?

Tags: scala, apache-spark, apache-spark-sql, aggregate-functions, apache-spark-ml

I have a DataFrame with two columns: ID of type Int and Vec of type Vector (org.apache.spark.mllib.linalg.Vector).

The DataFrame looks like this:

ID,Vec
1,[0,0,5]
1,[4,0,1]
1,[1,2,1]
2,[7,5,0]
2,[3,3,4]
3,[0,8,1]
3,[0,0,1]
3,[7,7,7]
....
I want to do a groupBy($"ID") and then apply an aggregation over the rows of each group, summing the vectors.

The desired output for the example above would be:

ID,SumOfVectors
1,[5,2,7]
2,[10,8,4]
3,[7,15,9]
...
The built-in aggregation functions will not work; for example, df.groupBy($"ID").agg(sum($"Vec")) throws a ClassCastException.


How can I implement a custom aggregate function that lets me sum vectors or arrays, or perform any other custom operation?
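
For reference, a minimal sketch (not part of the original question) that reproduces this setup. It uses the newer org.apache.spark.ml.linalg vectors, which the answers below work with, and assumes an active SparkSession named spark:

import org.apache.spark.ml.linalg.Vectors
import spark.implicits._

val df = Seq(
  (1, Vectors.dense(0.0, 0.0, 5.0)),
  (1, Vectors.dense(4.0, 0.0, 1.0)),
  (1, Vectors.dense(1.0, 2.0, 1.0)),
  (2, Vectors.dense(7.0, 5.0, 0.0)),
  (2, Vectors.dense(3.0, 3.0, 4.0)),
  (3, Vectors.dense(0.0, 8.0, 1.0)),
  (3, Vectors.dense(0.0, 0.0, 1.0)),
  (3, Vectors.dense(7.0, 7.0, 7.0))
).toDF("ID", "Vec")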

Spark >= 3.0

You can use Summarizer with sum:

import org.apache.spark.ml.stat.Summarizer

df
  .groupBy($"id")
  .agg(Summarizer.sum($"vec").alias("vec"))
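
As a side note (a sketch, assuming the df built earlier), Summarizer can also compute several statistics in one pass via metrics(...).summary(...), and the sum can then be pulled out of the resulting struct column:

df.groupBy($"ID")
  .agg(Summarizer.metrics("sum", "mean").summary($"Vec").alias("stats"))
  .select($"ID", $"stats.sum".alias("SumOfVectors"))
  .show()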
Spark < 3.0

I would suggest the following (it works from Spark 2.0.2 onward). It could probably be optimized, but it works well; the one thing you have to know up front is the vector size when you create the UDAF instance.

import org.apache.spark.ml.linalg._
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class VectorAggregate(val numFeatures: Int)
    extends UserDefinedAggregateFunction {

  // The buffer is a sparse map from vector index to its running sum.
  private type B = Map[Int, Double]

  // SQLDataTypes.VectorType is the public handle for the vector SQL type.
  def inputSchema: StructType =
    StructType(StructField("vec", SQLDataTypes.VectorType) :: Nil)

  def bufferSchema: StructType =
    StructType(StructField("agg", MapType(IntegerType, DoubleType)) :: Nil)

  def initialize(buffer: MutableAggregationBuffer): Unit =
    buffer.update(0, Map.empty[Int, Double])

  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    val zero = buffer.getAs[B](0)
    input match {
      // Dense input: every position contributes.
      case Row(DenseVector(values)) =>
        buffer.update(0, values.zipWithIndex.foldLeft(zero) {
          case (acc, (v, i)) => acc.updated(i, v + acc.getOrElse(i, 0d))
        })
      // Sparse input: only the explicitly stored positions contribute.
      case Row(SparseVector(_, indices, values)) =>
        buffer.update(0, values.zip(indices).foldLeft(zero) {
          case (acc, (v, i)) => acc.updated(i, v + acc.getOrElse(i, 0d))
        })
    }
  }

  def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    val zero = buffer1.getAs[B](0)
    buffer1.update(0, buffer2.getAs[B](0).foldLeft(zero) {
      case (acc, (i, v)) => acc.updated(i, v + acc.getOrElse(i, 0d))
    })
  }

  def deterministic: Boolean = true

  def evaluate(buffer: Row): Any = {
    val Row(agg: B) = buffer
    val indices = agg.keys.toArray.sorted
    // Build a sparse vector and let compressed pick the denser representation if appropriate.
    Vectors.sparse(numFeatures, indices, indices.map(agg)).compressed
  }

  def dataType: DataType = SQLDataTypes.VectorType
}
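
A usage sketch (assuming the DataFrame from the question, with 3-element vectors in a column named Vec): the vector size is passed to the constructor and the UDAF is then applied inside agg:

val vectorSum = new VectorAggregate(3)

df.groupBy($"ID")
  .agg(vectorSum($"Vec").alias("SumOfVectors"))
  .show()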

Comments:

If anyone is trying to do something similar in PySpark, the syntax can probably be replicated. I see the trick is to use breeze.linalg.DenseVector; why does that work while the dense vector from mllib.linalg does not?

@olies The problem is that, UDAF aside, the Scala mllib.linalg.Vector has no + method. You can deconstruct the underlying arrays, aggregate them separately, and then rebuild the vector, but if you are asking about an out-of-the-box solution, I have not found one.

@zero323 I am trying this on Spark 2.0 now, passing the vector to a Normalizer, without success; I get: org.apache.spark.mllib.linalg.DenseVector cannot be cast to org.apache.spark.ml.linalg.Vector. Is there an update for Spark 2.0?

@Rami You need the o.a.s.ml.linalg imports.
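
Regarding the cast error above: if the column still holds old-style org.apache.spark.mllib.linalg vectors, one option (a sketch, assuming a column named Vec) is to convert it to the new ml vectors before aggregating, e.g. with MLUtils:

import org.apache.spark.mllib.util.MLUtils

// Converts every listed vector column (here only "Vec") from mllib to ml vectors.
val dfMl = MLUtils.convertVectorColumnsToML(df, "Vec")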
Another option for pre-3.0 (2.x) versions of Spark is the UDAF below, which keeps the running sum in a fixed-size array buffer; the vector size n is again passed to the constructor:

import org.apache.spark.sql.expressions.{MutableAggregationBuffer,
  UserDefinedAggregateFunction}
import org.apache.spark.ml.linalg.{Vector, Vectors, SQLDataTypes}
import org.apache.spark.sql.types.{StructType, ArrayType, DoubleType}
import org.apache.spark.sql.Row
import scala.collection.mutable.WrappedArray

class VectorSum (n: Int) extends UserDefinedAggregateFunction {
    def inputSchema = new StructType().add("v", SQLDataTypes.VectorType)
    def bufferSchema = new StructType().add("buff", ArrayType(DoubleType))
    def dataType = SQLDataTypes.VectorType
    def deterministic = true

    def initialize(buffer: MutableAggregationBuffer) = {
      // Start from a zero vector of the expected size.
      buffer.update(0, Array.fill(n)(0.0))
    }

    def update(buffer: MutableAggregationBuffer, input: Row) = {
      if (!input.isNullAt(0)) {
        val buff = buffer.getAs[WrappedArray[Double]](0)
        // Work on the sparse representation so only non-zero entries are touched.
        val v = input.getAs[Vector](0).toSparse
        for (i <- v.indices) {
          buff(i) += v(i)
        }
        buffer.update(0, buff)
      }
    }

    def merge(buffer1: MutableAggregationBuffer, buffer2: Row) = {
      val buff1 = buffer1.getAs[WrappedArray[Double]](0)
      val buff2 = buffer2.getAs[WrappedArray[Double]](0)
      // Element-wise sum of the two partial buffers.
      for ((x, i) <- buff2.zipWithIndex) {
        buff1(i) += x
      }
      buffer1.update(0, buff1)
    }

    def evaluate(buffer: Row) = Vectors.dense(
      buffer.getAs[Seq[Double]](0).toArray)
}
df.groupBy($"id").agg(new VectorSum(3)($"vec") alias "vec").show

// +---+--------------+
// | id|           vec|
// +---+--------------+
// |  1| [5.0,2.0,7.0]|
// |  2|[10.0,8.0,4.0]|
// |  3|[7.0,15.0,9.0]|
// +---+--------------+