
Using type polymorphism to flatten a sequence of maps into a map in Scala, Spark, and a UDF


I have the following function, which flattens a sequence of maps of String to Double. How can I make the String and Double types generic?

val flattenSeqOfMaps = udf { values: Seq[Map[String, Double]] => values.flatten.toMap }
flattenSeqOfMaps: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,MapType(StringType,DoubleType,false),Some(List(ArrayType(MapType(StringType,DoubleType,false),true))))
Thanks.

Edit 1: I am using Spark 2.3. I am aware of the higher-order functions in Spark 2.4.

Edit 2: I got a bit closer. What do I need in place of f _ in val flattenSeqOfMaps = udf { f _ }? Compare the type signature of joinMap with the type signature of flattenSeqOfMaps below.

scala> val joinMap = udf { values: Seq[Map[String, Double]] => values.flatten.toMap }
joinMap: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,MapType(StringType,DoubleType,false),Some(List(ArrayType(MapType(StringType,DoubleType,false),true))))

scala> def f[S,D](values: Seq[Map[S, D]]): Map[S,D] = { values.flatten.toMap}
f: [S, D](values: Seq[Map[S,D]])Map[S,D]

scala> val flattenSeqOfMaps = udf { f _}
flattenSeqOfMaps: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,MapType(NullType,NullType,true),Some(List(ArrayType(MapType(NullType,NullType,true),true))))
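The NullType signature above is a consequence of eta-expansion: with no expected type, f _ instantiates the type parameters as Nothing, and Spark renders the resulting TypeTags as NullType in the schema. A minimal REPL sketch of just the inference step (the exact echo of the function value may vary by Scala version):

scala> def f[S, D](values: Seq[Map[S, D]]): Map[S, D] = values.flatten.toMap
f: [S, D](values: Seq[Map[S,D]])Map[S,D]

scala> val g = f _  // S and D are inferred as Nothing
g: Seq[Map[Nothing,Nothing]] => Map[Nothing,Nothing] = <function1>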
Edit 3: The following code works for me:

scala> def f[S,D](values: Seq[Map[S, D]]): Map[S,D] = { values.flatten.toMap }
f: [S, D](values: Seq[Map[S,D]])Map[S,D]

scala> val flattenSeqOfMaps = udf { f[String,Double] _ }
flattenSeqOfMaps: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,MapType(StringType,DoubleType,false),Some(List(ArrayType(MapType(StringType,DoubleType,false),true))))
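As a quick sanity check, the typed UDF can be applied to a small DataFrame (the sample data and column name below are made up for illustration):

scala> val df = Seq(Seq(Map("a" -> 1.0), Map("b" -> 2.0))).toDF("val")
scala> df.select(flattenSeqOfMaps($"val") as "flat").show

which should print a single row containing the merged map [a -> 1.0, b -> 2.0].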

While you can define the function as follows (the TypeTag context bounds give udf the runtime type information it needs to derive the result schema):

import scala.reflect.runtime.universe.TypeTag

def flattenSeqOfMaps[S : TypeTag, D: TypeTag] = udf { 
  values: Seq[Map[S, D]] => values.flatten.toMap
}
and then use it with specific instances:

val df = Seq(Seq(Map("a" -> 1), Map("b" -> 1))).toDF("val")

val flattenSeqOfMapsStringInt = flattenSeqOfMaps[String, Int]

df.select($"val", flattenSeqOfMapsStringInt($"val") as "val").show
+--------------------+----------------+
|                 val|             val|
+--------------------+----------------+
|[[a -> 1], [b -> 1]]|[a -> 1, b -> 1]|
+--------------------+----------------+

it is also possible to use the built-in functions instead, with no need for explicit generics:

import org.apache.spark.sql.functions.{expr, flatten, map_from_arrays}

def flattenSeqOfMaps_(col: String) = {
  val keys = flatten(expr(s"transform(`$col`, x -> map_keys(x))"))
  val values = flatten(expr(s"transform(`$col`, x -> map_values(x))"))
  map_from_arrays(keys, values)
}

df.select($"val", flattenSeqOfMaps_("val") as "val").show
+--------------------+----------------+
|                 val|             val|
+--------------------+----------------+
|[[a -> 1], [b -> 1]]|[a -> 1, b -> 1]|
+--------------------+----------------+

Note that transform, flatten, and map_from_arrays were all added in Spark 2.4, so this variant is not available on Spark 2.3.
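On Spark 3.0 and later, the same idea can also be written without expr strings, since the higher-order functions gained Column-based wrappers in org.apache.spark.sql.functions. A sketch under that assumption, reusing the df from above (flattenSeqOfMapsCols is a hypothetical name for illustration):

import org.apache.spark.sql.functions.{col, flatten, map_from_arrays, map_keys, map_values, transform}

def flattenSeqOfMapsCols(c: String) =
  map_from_arrays(
    flatten(transform(col(c), m => map_keys(m))),   // all keys, in element order
    flatten(transform(col(c), m => map_values(m)))  // all values, in the same order
  )

df.select($"val", flattenSeqOfMapsCols("val") as "val").show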
The following code works for me:

scala> def f[S,D](values: Seq[Map[S, D]]): Map[S,D] = { values.flatten.toMap }
f: [S, D](values: Seq[Map[S,D]])Map[S,D]

scala> val flattenSeqOfMaps = udf { f[String,Double] _ }
flattenSeqOfMaps: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,MapType(StringType,DoubleType,false),Some(List(ArrayType(MapType(StringType,DoubleType,false),true))))


My bad, I did not mention that I am on Spark 2.3. I have edited the question.