
scala spark UDF ClassCastException: WrappedArray$ofRef cannot be cast to [Lscala.Tuple2


So I do the necessary imports etc.

import org.apache.spark.sql.functions.udf
import org.apache.spark.sql.types._
import spark.implicits._
Then I define some lat/long points:

val london = (1.0, 1.0)
val suburbia = (2.0, 2.0)
val southampton = (3.0, 3.0)  
val york = (4.0, 4.0)  
Then I create a Spark dataframe like this, and check that it works:

val exampleDF = Seq((List(london,suburbia),List(southampton,york)),
    (List(york,london),List(southampton,suburbia))).toDF("AR1","AR2")
exampleDF.show()
The dataframe consists of the following types:

DataFrame = [AR1: array<struct<_1:double,_2:double>>, AR2: array<struct<_1:double,_2:double>>]
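(For reference, printSchema spells the nested types out in full; this is a sketch of the expected output given the construction above, not taken from the original post:)

exampleDF.printSchema()
root
 |-- AR1: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- _1: double (nullable = false)
 |    |    |-- _2: double (nullable = false)
 |-- AR2: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- _1: double (nullable = false)
 |    |    |-- _2: double (nullable = false)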

I create a function to build combinations of the points:

// function to do what I want
val latlongexplode =  (x: Array[(Double,Double)], y: Array[(Double,Double)]) => {
 for (a <- x; b <-y) yield (a,b)
}
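(Calling it directly on plain Scala arrays, which is not part of the original post, returns the expected combinations:)

latlongexplode(Array(london, suburbia), Array(southampton, york))
// Array(((1.0,1.0),(3.0,3.0)), ((1.0,1.0),(4.0,4.0)), ((2.0,2.0),(3.0,3.0)), ((2.0,2.0),(4.0,4.0)))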
And it does. But after I create a UDF from this function

// declare function into a Spark UDF
val latlongexplodeUDF = udf (latlongexplode) 
and try to use it on the Spark dataframe created above, like this:

exampleDF.withColumn("latlongexplode", latlongexplodeUDF($"AR1",$"AR2")).show(false)
I get a really long stack trace that basically boils down to:

java.lang.ClassCastException: scala.collection.mutable.WrappedArray$ofRef cannot be cast to [Lscala.Tuple2;
org.apache.spark.sql.catalyst.expressions.ScalaUDF.$anonfun$f$3(ScalaUDF.scala:121)
org.apache.spark.sql.catalyst.expressions.ScalaUDF.eval(ScalaUDF.scala:1063)
org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:151)
org.apache.spark.sql.catalyst.expressions.InterpretedProjection.apply(Projection.scala:50)
org.apache.spark.sql.catalyst.expressions.InterpretedProjection.apply(Projection.scala:32)
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:273)

How can I get this UDF to work in Scala Spark? (I'm on Spark 2.4 at the moment, if that helps.)

Edit: it may be that part of the problem is how I've built my example df.
But the actual data I have is an array of lat/long tuples (of unknown size) in each column.
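(For reference, a quick way to see what Spark actually hands the UDF at runtime; this is a sketch written against Spark 2.4, not part of the original question:)

import org.apache.spark.sql.Row

// the array<struct> column arrives as a Seq of Row objects, not as Array[(Double, Double)]
val inspect = udf((x: Seq[Row]) => x.headOption.map(_.getClass.getName).getOrElse("empty"))
exampleDF.select(inspect($"AR1")).show(false)
// typically shows org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema, i.e. a Row, not a Tuple2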

When you use struct types in a UDF, they are represented as Row objects, and array columns are represented as Seq. You also need to return structs as Rows, and you need to define a schema for the returned structs (full example below).

Comments:

"You might want to reach out to Raphael Roth on this, he seems to have gone further than most. It relates to the struct aspect of the arrays, but I don't know how to solve it. @raphaelroth can you comment?"

"@BluePhantom no need for Raphael, I've solved it :)"

"@mck thanks for the explanation... and the solution. Much appreciated."

"Impressive, I was nearly there. Will try it tomorrow."

"Just saw it. The error message is hard to understand. I would have thought a case class was needed. Nice job."

"@BluePhantom yes, I think a case class might be better - defining the udf schema is deprecated. But the structs seemed a bit complicated to define as case classes, so I went with the udf schema. The OP is on Spark 2.4 anyway, so it isn't an issue."

"@mck + thebluephantom many thanks to you both! I work with Mamonu on an open-source data linkage package called Splink, which uses Spark, so this is really useful!"

(A rough sketch of the case-class alternative mentioned above is added at the end of the page.)
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val london = (1.0, 1.0)
val suburbia = (2.0, 2.0)
val southampton = (3.0, 3.0)  
val york = (4.0, 4.0)
val exampleDF = Seq((List(london,suburbia),List(southampton,york)),
    (List(york,london),List(southampton,suburbia))).toDF("AR1","AR2")
exampleDF.show(false)
+------------------------+------------------------+
|AR1                     |AR2                     |
+------------------------+------------------------+
|[[1.0, 1.0], [2.0, 2.0]]|[[3.0, 3.0], [4.0, 4.0]]|
|[[4.0, 4.0], [1.0, 1.0]]|[[3.0, 3.0], [2.0, 2.0]]|
+------------------------+------------------------+
val latlongexplode = (x: Seq[Row], y: Seq[Row]) => {
    for (a <- x; b <- y) yield Row(a, b)
}

val udf_schema = ArrayType(
    StructType(Seq(
        StructField(
            "city1",
            StructType(Seq(
                StructField("lat", FloatType),
                StructField("long", FloatType)
            ))
        ),
        StructField(
            "city2",
            StructType(Seq(
                StructField("lat", FloatType),
                StructField("long", FloatType)
            ))
        )
    ))
)

// include this line if you see errors like 
// "You're using untyped Scala UDF, which does not have the input type information."
// spark.sql("set spark.sql.legacy.allowUntypedScalaUDF = true")

val latlongexplodeUDF = udf(latlongexplode, udf_schema)
val result = exampleDF.withColumn("latlongexplode", latlongexplodeUDF($"AR1",$"AR2"))
result.show(false)
+------------------------+------------------------+--------------------------------------------------------------------------------------------------------+
|AR1                     |AR2                     |latlongexplode                                                                                          |
+------------------------+------------------------+--------------------------------------------------------------------------------------------------------+
|[[1.0, 1.0], [2.0, 2.0]]|[[3.0, 3.0], [4.0, 4.0]]|[[[1.0, 1.0], [3.0, 3.0]], [[1.0, 1.0], [4.0, 4.0]], [[2.0, 2.0], [3.0, 3.0]], [[2.0, 2.0], [4.0, 4.0]]]|
|[[4.0, 4.0], [1.0, 1.0]]|[[3.0, 3.0], [2.0, 2.0]]|[[[4.0, 4.0], [3.0, 3.0]], [[4.0, 4.0], [2.0, 2.0]], [[1.0, 1.0], [3.0, 3.0]], [[1.0, 1.0], [2.0, 2.0]]]|
+------------------------+------------------------+--------------------------------------------------------------------------------------------------------+
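
(As mentioned in the comments, a case class could replace the hand-written udf_schema, since Spark can derive the return schema from case classes. The following is only a rough sketch of that alternative, written against Spark 2.4 and not taken from the answer above; the Point and CityPair names are made up:)

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

case class Point(lat: Double, long: Double)
case class CityPair(city1: Point, city2: Point)

// the array<struct> inputs still arrive as Seq[Row]; returning case-class instances
// lets Spark infer the ArrayType/StructType schema instead of spelling it out by hand
val latlongexplodeCC = udf { (x: Seq[Row], y: Seq[Row]) =>
  for (a <- x; b <- y) yield CityPair(
    Point(a.getDouble(0), a.getDouble(1)),
    Point(b.getDouble(0), b.getDouble(1)))
}

exampleDF.withColumn("latlongexplode", latlongexplodeCC($"AR1", $"AR2")).show(false)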