SumProduct across columns in a Scala Spark DataFrame


I want to create a sumproduct across columns in a Spark DataFrame. I have a DataFrame that looks like this:

id    val1   val2   val3   val4
123   10     5      7      5

I also have a map that looks like this:

val coefficients = Map("val1" -> 1, "val2" -> 2, "val3" -> 3, "val4" -> 4)
I want to take the value in each column of the DataFrame, multiply it by the corresponding value in the map, and return the result in a new column, so essentially:

(10*1) + (5*2) + (7*3) + (5*4) = 61
I tried this:

val myDF1 = myDF.withColumn("mySum", {var a:Double = 0.0; for ((k,v) <- coefficients) a + (col(k).cast(DoubleType)*coefficients(k));a})

The problem seems to be that you never actually do anything with the variable a:

for((k, v) <- coefficients) a + ...
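
To be concrete, a for-loop body's value is simply discarded in Scala, so the sum never builds up. A minimal plain-Scala sketch of what accumulating into a would actually look like (illustrative only; it does not address the Column typing issue discussed further down):

var a = 0.0
for ((k, v) <- coefficients) a += v                        // reassign on each iteration
val sameTotal = coefficients.values.foldLeft(0.0)(_ + _)   // or fold, without mutation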

I'm not sure whether this is achievable through the DataFrame API, since you can only work with columns, not with any predefined closures (e.g. your parameter map).
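
As an aside, one DataFrame-side workaround would be a user-defined function that closes over the map. A hedged sketch, reusing the df1 and coefficients defined in this thread and hard-coding the four column names:

import org.apache.spark.sql.functions.udf

// Hypothetical UDF closing over the coefficients map; the column list is fixed
val weighted = udf { (v1: Int, v2: Int, v3: Int, v4: Int) =>
  v1 * coefficients("val1") + v2 * coefficients("val2") +
  v3 * coefficients("val3") + v4 * coefficients("val4")
}

df1.withColumn("total", weighted(df1("val1"), df1("val2"), df1("val3"), df1("val4"))).show()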

I've outlined an approach below that uses the DataFrame's underlying RDD:

import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

// Initializing your input example.
val df1 = sc.parallelize(Seq((123, 10, 5, 7, 5))).toDF("id", "val1", "val2", "val3", "val4")

// Return column names as an array
val names = df1.columns

// Grab underlying RDD and zip elements with column names
val rdd1 = df1.rdd.map(row => (0 until row.length).map(row.getInt(_)).zip(names))

// Tack on accumulated total to the existing row
val rdd2 = rdd1.map { seq => Row.fromSeq(seq.map(_._1) :+ seq.map { case (value: Int, name: String) => value * coefficients.getOrElse(name, 0) }.sum) }

// Create output schema (with total)
val totalSchema = StructType(df1.schema.fields :+ StructField("total", IntegerType))

// Apply schema to create output dataframe
val df2 = sqlContext.createDataFrame(rdd2, totalSchema)

// Show output:
df2.show()
...
+---+----+----+----+----+-----+
| id|val1|val2|val3|val4|total|
+---+----+----+----+----+-----+
|123|  10|   5|   7|   5|   61|
+---+----+----+----+----+-----+
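
If the value columns are not guaranteed to all be Int, a hedged variant of the same RDD approach (not part of the original answer) is to read each row generically via row.toSeq and store the total as a Double; rddGeneric and doubleSchema below are illustrative names:

// Read values generically so non-Int numeric columns also work
val rddGeneric = df1.rdd.map { row =>
  val total = names.zip(row.toSeq).collect {
    case (name, v: java.lang.Number) => v.doubleValue() * coefficients.getOrElse(name, 0)
  }.sum
  Row.fromSeq(row.toSeq :+ total)
}

// Same schema trick, but the appended column is a Double
val doubleSchema = StructType(df1.schema.fields :+ StructField("total", DoubleType))
sqlContext.createDataFrame(rddGeneric, doubleSchema).show()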

The problem with your code is that you are trying to add a Column to a Double. cast(DoubleType) affects only the type of the stored values, not the type of the column itself. Since Double does not provide a *(x: org.apache.spark.sql.Column): org.apache.spark.sql.Column method, everything fails.
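
A minimal illustration of that type mismatch (an assumed snippet, not from the original answer):

import org.apache.spark.sql.functions.{col, lit}

// Does not compile: Double.+ has no overload that accepts a Column
// val broken = 0.0 + col("val1").cast("double") * 2

// Compiles: the expression starts from a Column, so Column.* and Column.+ apply
val ok = col("val1") * 2 + col("val2") * 3

// Equivalent, lifting the literals explicitly
val okExplicit = col("val1") * lit(2) + col("val2") * lit(3)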

To make it work you can, for example, do something like this:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, lit}

val df = sc.parallelize(Seq(
    (123, 10, 5, 7, 5), (456,  1, 1, 1, 1)
)).toDF("k", "val1", "val2", "val3", "val4")

val coefficients = Map("val1" -> 1, "val2" -> 2, "val3" -> 3, "val4" -> 4)

val dotProduct: Column = coefficients
  // To be explicit you can replace
  // col(k) * v with col(k) * lit(v)
  // but it is not required here
  // since * resolves to Column.*, not Int.*
  .map{ case (k, v) => col(k) * v }  // * -> Column.*
  .reduce(_ + _)  // + -> Column.+

df.withColumn("mySum", dotProduct).show
// +---+----+----+----+----+-----+
// |  k|val1|val2|val3|val4|mySum|
// +---+----+----+----+----+-----+
// |123|  10|   5|   7|   5|   61|
// |456|   1|   1|   1|   1|   10|
// +---+----+----+----+----+-----+
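
One small note on the design: reduce throws on an empty collection, so if the coefficients map can be empty you might prefer a foldLeft with an explicit zero column (a sketch, not from the original answer):

val dotProductSafe: Column = coefficients.foldLeft(lit(0)) {
  case (acc, (k, v)) => acc + col(k) * v
}

df.withColumn("mySum", dotProductSafe).show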
