Apache Spark org.apache.spark.sql.AnalysisException: Can't extract value from sum(_c9#30);


I am using Spark SQL to select one column along with the sum of another column. Here is my query:

scala> spark.sql("select distinct _c3,sum(_c9).as(sumAadhar)  from aadhar group by _c3 order by _c9 desc LIMIT 3").show
My schema is:

root
 |-- _c0: string (nullable = true)
 |-- _c1: string (nullable = true)
 |-- _c2: string (nullable = true)
 |-- _c3: string (nullable = true)
 |-- _c4: string (nullable = true)
 |-- _c5: string (nullable = true)
 |-- _c6: string (nullable = true)
 |-- _c7: string (nullable = true)
 |-- _c8: string (nullable = true)
 |-- _c9: double (nullable = true)
 |-- _c10: string (nullable = true)
 |-- _c11: string (nullable = true)
 |-- _c12: string (nullable = true)
And I am getting the following error:

org.apache.spark.sql.AnalysisException: Can't extract value from sum(_c9#30);
  at org.apache.spark.sql.catalyst.expressions.ExtractValue$.apply(complexTypeExtractors.scala:73)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:613)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$5.applyOrElse(Analyzer.scala:605)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:308)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:308)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:307)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:305)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:305)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:328)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:186)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:326)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:305)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionUp$1(QueryPlan.scala:269)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2(QueryPlan.scala:279)
  at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$2$1.apply(QueryPlan.scala:283)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
  at scala.collection.immutable.List.foreach(List.scala:381)
  at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
  at scala.collection.immutable.List.map(List.scala:285)

Any ideas what I am doing wrong, or is there any other way to sum up the values of a column?

Tried this on a simplified version of your schema:

scala> val df = Seq(("a", 2), ("a", 3), ("b", 4), ("a", 9), ("b", 1), ("c", 100)).toDF("_c3", "_c9")
df: org.apache.spark.sql.DataFrame = [_c3: string, _c9: int]

scala> df.createOrReplaceTempView("aadhar")

scala> spark.sql("select _c3,sum(_c9) as sumAadhar from aadhar group by _c3 order by sumAadhar desc LIMIT 3").show
+---+---------+ 
|_c3|sumAadhar|
+---+---------+ 
|  c|      100| 
|  a|       14| 
|  b|        5|
+---+---------+
Removed the distinct, since it is not necessary: your original query already groups by _c3. Changed sum(_c9).as(sumAadhar) to sum(_c9) as sumAadhar, since I think that syntax was leading Spark SQL to do some unintended casting.
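
For what it's worth, .as(...) is the Column method from the DataFrame DSL, not SQL syntax; judging by the ExtractValue frames in your stack trace, the SQL parser seems to treat sum(_c9).as as an attempt to extract a field from the sum result, which is a plain double, not a struct. If you prefer that style, here is a minimal sketch of the same aggregation through the DataFrame API, where .as is valid (reusing the df defined above):

import org.apache.spark.sql.functions.{sum, desc}

// Same aggregation as the SQL above, expressed with the DataFrame API.
// Column.as is where the .as(...) syntax actually belongs.
df.groupBy("_c3")
  .agg(sum("_c9").as("sumAadhar"))
  .orderBy(desc("sumAadhar"))
  .limit(3)
  .show()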
Getting the same error: org.apache.spark.sql.AnalysisException: cannot resolve '`_c9`' given input columns: [_c3, sum(CAST(_c9 AS DOUBLE))]; line 1 pos 55; but when I use the alias it works fine: spark.sql("select _c3, sum(_c9) as total from aadhar group by _c3 order by total LIMIT 3").show

@KumarHarsh, from your last comment it seems this solution worked for you. If so, consider marking it as the answer.
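
As for the error in that comment: after a group by, the bare column _c9 is no longer part of the output, so the order by has to go through either the alias (as in the query above) or the aggregate expression itself. A sketch of the latter, against the same aadhar view:

spark.sql("select _c3, sum(_c9) as sumAadhar from aadhar group by _c3 order by sum(_c9) desc LIMIT 3").show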