Scala: Computing the mean of an RDD[Vector] in Spark

I have an RDD of Breeze vectors and want to compute their mean. My first approach was to use aggregate:

import org.apache.spark.{ SparkConf, SparkContext }
import org.apache.spark.rdd.RDD
import org.scalatest.{ BeforeAndAfterAll, FunSuite, Matchers, Suite }
import org.scalatest.prop.GeneratorDrivenPropertyChecks

import breeze.linalg.{ Vector => BreezeVector }

class CalculateMean extends FunSuite with Matchers with GeneratorDrivenPropertyChecks with SparkSpec {

  test("Calculate mean") {

    type U = (BreezeVector[Double], Int)
    type T = BreezeVector[Double]
    val rdd: RDD[T] = sc.parallelize(List(1.0, 2, 3, 4, 5, 6).map { x => BreezeVector(x, x * x) }, 2)

    // Zero value: a zero vector paired with a zero count.
    val zeroValue = (BreezeVector.zeros[Double](2), 0)
    // Within a partition: add each vector to the running sum and bump the count.
    val seqOp = (agg: U, x: T) => (agg._1 + x, agg._2 + 1)
    // Across partitions: add the partial sums and the counts.
    val combOp = (xs: U, ys: U) => (xs._1 + ys._1, xs._2 + ys._2)

    val mean = rdd.aggregate(zeroValue)(seqOp, combOp)
    // Element-wise sum divided by the count yields the mean vector.
    println(mean._1 / mean._2.toDouble)

  }

}

/**
 * Setup and tear down spark context
 */
trait SparkSpec extends BeforeAndAfterAll {
  this: Suite =>

  private val master = "local[2]"
  private val appName = this.getClass.getSimpleName

  private var _sc: SparkContext = _

  def sc: org.apache.spark.SparkContext = _sc

  val conf: SparkConf = new SparkConf()
    .setMaster(master)
    .setAppName(appName)

  override def beforeAll(): Unit = {
    super.beforeAll()
    _sc = new SparkContext(conf)
  }

  override def afterAll(): Unit = {
    if (_sc != null) {
      _sc.stop()
      _sc = null
    }

    super.afterAll()
  }
}
However, this algorithm may be numerically unstable (see the reference).
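For reference, a minimal scalar sketch (my own, not part of the original question) contrasting the naive sum-and-divide mean with the incremental update Knuth describes, which keeps intermediate values small instead of accumulating one large sum:

def naiveMean(xs: Seq[Double]): Double =
  // Sum everything first, then divide: the running sum can grow large and
  // lose precision over long streams.
  xs.sum / xs.size

def incrementalMean(xs: Seq[Double]): Double =
  // Knuth/Welford-style update: mean += (x - mean) / n after each element.
  xs.zipWithIndex.foldLeft(0.0) { case (m, (x, i)) => m + (x - m) / (i + 1) }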

How can I implement Knuth's algorithm for Breeze vectors in Spark, and is rdd.aggregate the recommended approach?

If the algorithm described by Knuth were the right choice, aggregate could be a good way to do it. Unfortunately it is not, or at least not without some tweaking. It is an inherently sequential streaming algorithm, and the function it applies is not associative. Suppose you have a function knuth_mean. It should be clear that (ignoring the count and the single-element case):

(knuth_mean (knuth_mean (knuth_mean 1 2) 3) 4)

is not the same as

(knuth_mean (knuth_mean 1 2) (knuth_mean 3 4))
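To make the difference concrete, a small sketch of my own (not part of the answer), treating knuth_mean as a count-free binary mean:

// Hypothetical count-free merge: just the mean of its two arguments.
def knuthMean(a: Double, b: Double): Double = (a + b) / 2.0

// Sequential (streaming) order of application:
val sequential = knuthMean(knuthMean(knuthMean(1.0, 2.0), 3.0), 4.0)  // 3.125
// Tree-shaped order, the way aggregate is free to combine partitions:
val treeShaped = knuthMean(knuthMean(1.0, 2.0), knuthMean(3.0, 4.0))  // 2.5

Only the balanced grouping happens to equal the true mean of 1, 2, 3, 4 here; in general neither order is correct unless the element counts are carried along.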
Still, you can use Knuth's algorithm to compute a per-partition mean:

def partMean(n: Int)(iter: Iterator[BreezeVector[Double]]) = {
  // Knuth/Welford update: mean += (x - mean) / count, carrying the count along.
  val partialMean = iter.foldLeft((BreezeVector.zeros[Double](n), 0.0))(
    (acc: (BreezeVector[Double], Double), v: BreezeVector[Double]) =>
      (acc._1 + (v - acc._1) / (acc._2 + 1.0), acc._2 + 1.0))
  Iterator(partialMean)
}

val means = rdd.mapPartitions(partMean(lengthOfVector))  // lengthOfVector: dimension of the vectors (2 in the example above)
The problem remains how to aggregate these partial results. Applying Knuth's algorithm directly would require unfolding the partitions, which pretty much defeats the purpose of using Spark. You can take a look at the referenced method to see how this is handled internally in Spark.
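One way to close the gap, as a sketch under my own assumptions rather than part of the original answer: each partition yields a (mean, count) pair, so the partial results can be merged with a count-weighted update, much in the spirit of Spark's StatCounter.merge, and then reduced:

type PartialMean = (BreezeVector[Double], Double)

// Hypothetical helper (my own, not from the answer): merge two (mean, count)
// pairs with a count-weighted update. Defined as a function value so the
// closure shipped to the executors stays serializable.
val mergeMeans: (PartialMean, PartialMean) => PartialMean = (a, b) => {
  val n = a._2 + b._2
  if (n == 0.0) a
  // combinedMean = meanA + (meanB - meanA) * countB / (countA + countB)
  else (a._1 + (b._1 - a._1) * (b._2 / n), n)
}

val (mean, count) = means.reduce(mergeMeans)
println(s"mean = $mean, count = $count")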
