Apache Spark: passing a whole row to a Spark UDF via a DataFrame throws AnalysisException


I am trying to pass a whole row, along with a few other arguments, to a Spark UDF. I am not using Spark SQL but the DataFrame/Column API, and I get the following exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Resolved attribute(s) col3#9 missing from col1#7,col2#8,col3#13 in operator !Project [col1#7, col2#8, col3#13, UDF(col3#9, col2, named_struct(col1, col1#7, col2, col2#8, col3, col3#9)) AS contcatenated#17]. Attribute(s) with the same name appear in the operation: col3. Please check if the right attribute(s) are used.;;
The exception above can be reproduced with the following code:

    addRowUDF() // invoking this reproduces the exception

    def addRowUDF() {
        import org.apache.spark.SparkConf
        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder()
          .config(new SparkConf().setMaster("local[*]"))
          .appName(this.getClass.getSimpleName)
          .getOrCreate()

        import spark.implicits._
        val df = Seq(
          ("a", "b", "c"),
          ("a1", "b1", "c1")).toDF("col1", "col2", "col3")
        execute(df)
      }

  def execute(df: org.apache.spark.sql.DataFrame) {

    import org.apache.spark.sql.Row
    // concatenates a value, a marker string and the whole row into one string
    def concatFunc(x: Any, y: String, row: Row) = x.toString + ":" + y + ":" + row.mkString(", ")

    import org.apache.spark.sql.functions.{ udf, struct, lit }

    // UDF taking a column value, a string and the whole row packed as a struct
    val combineUdf = udf((x: Any, y: String, row: Row) => concatFunc(x, y, row))

    // helper that applies combineUdf to an arbitrary list of columns (the udf name argument is unused)
    def udf_execute(udf: String, args: org.apache.spark.sql.Column*) = (combineUdf)(args: _*)

    val columns = df.columns.map(df(_)) // column references resolved against the original df

    val df2 = df.withColumn("col3", lit("xxxxxxxxxxx")) // replaces col3 with a literal column

    val df3 = df2.withColumn("contcatenated", udf_execute("uudf", df2.col("col3"), lit("col2"), struct(columns: _*)))

    df3.show(false)
  }
The expected output is:

+----+----+-----------+----------------------------+
|col1|col2|col3       |contcatenated               |
+----+----+-----------+----------------------------+
|a   |b   |xxxxxxxxxxx|xxxxxxxxxxx:col2:a, b, c    |
|a1  |b1  |xxxxxxxxxxx|xxxxxxxxxxx:col2:a1, b1, c1 |
+----+----+-----------+----------------------------+
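
For context, a quick isolation check (my own sketch reusing the helpers defined above, not part of the original post): the very same UDF call analyzes fine as long as col3 is not overwritten first, which points at the withColumn("col3", ...) step as the trigger:

    // same combineUdf / udf_execute / columns as in execute(), applied to the
    // original df, i.e. without shadowing col3 beforehand
    val ok = df.withColumn(
      "contcatenated",
      udf_execute("uudf", df.col("col3"), lit("col2"), struct(columns: _*)))
    ok.show(false) // col3 stays "c"/"c1" and no AnalysisException is thrown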

This happens because you are referring to a column that is no longer in scope. When you call:

val df2 = df.withColumn("col3", lit("xxxxxxxxxxx"))
it shadows the original col3 column, effectively making the earlier column with the same name inaccessible. Even if that were not the case, say after:

val df2 = df.select($"*", lit("xxxxxxxxxxx") as "col3")
the new col3 would be ambiguous and indistinguishable by name from the one brought in by *.
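
One way to actually see the shadowing (a sketch added for illustration, using df and df2 as defined above): a column reference resolved against the original df carries the old expression id, which df2's plan no longer produces, so using it against df2 fails with the same kind of "missing attribute" error as in the question:

val oldCol3 = df.col("col3")  // resolves to the original attribute (col3#9 in the error message)
df2.select(oldCol3)           // expected to fail analysis: col3#9 is not an output of df2
df2.select(df2.col("col3"))   // resolving against df2 picks up the new attribute and works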

So, to get the desired output, you have to use a different name:

val df2 = df.withColumn("col3_", lit("xxxxxxxxxxx"))
and then adjust the rest of the code accordingly:

df2.withColumn(
  "contcatenated",
  udf_execute("uudf", df2.col("col3_") as "col3",
    lit("col2"), struct(columns: _*))
).drop("col3_")
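
An alternative that avoids the rename altogether (my own sketch, not part of the original answer): apply the UDF while the original col3 is still in scope, and only then overwrite the column. Because columns was captured from df before the shadowing, this also yields the expected output:

val df3 = df
  .withColumn("contcatenated",
    udf_execute("uudf", lit("xxxxxxxxxxx"), lit("col2"), struct(columns: _*)))
  .withColumn("col3", lit("xxxxxxxxxxx")) // shadow col3 only after the whole row has been consumed
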
If the logic is as simple as in the example, you can of course just inline it:

df.withColumn(
  "contcatenated",
  udf_execute("uudf", lit("xxxxxxxxxxx") as "col3",
    lit("col2"), struct(columns: _*))
).drop("col3_")

"It shadows the original col3 column and removes it from the plan" - that much I had come to understand, but I still need to dig into how exactly it gets removed from the plan. I will mark this as answered, although my actual problem was slightly different and I solved it myself later the same day. Thanks for the reply :) Could you still explain how the column is removed from the logical plan? I poked around the Spark source but could not get much out of it - Spark source, Dataset.scala: private[spark] def withColumns(colNames: Seq[String], cols: Seq[Column]): DataFrame = {...}
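
Not from the original thread, but as far as I understand it, withColumn builds a new Project over the previous plan in which the replaced column is re-aliased under a fresh expression id, so the old attribute simply stops being an output of the plan rather than being deleted anywhere. One way to observe this (exact ids will differ between runs):

df.queryExecution.analyzed.output.foreach(a => println(s"${a.name}#${a.exprId.id}"))
// e.g. col1#7, col2#8, col3#9

val df2 = df.withColumn("col3", lit("xxxxxxxxxxx"))
df2.queryExecution.analyzed.output.foreach(a => println(s"${a.name}#${a.exprId.id}"))
// e.g. col1#7, col2#8, col3#13 -- the Project re-aliases col3 under a new id,
// so the old col3#9 can no longer be resolved against df2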