Did the Spark 2.x release break the SQL join syntax?

When I submit a complex join SQL query, I usually give one or both operands a shorter name to clarify my intent, e.g. the following two queries:

SELECT *
FROM transactions
JOIN accounts ON transactions.cardnumber=accounts.cardnumber

and

SELECT *
FROM transactions AS left
JOIN accounts ON left.cardnumber = accounts.cardnumber

should have the same effect.

I tested both queries on Spark 1.6.3 and both work fine. However, after moving to Spark 2.2.1, the second query throws the following error:

org.apache.spark.sql.AnalysisException: cannot resolve '`left.cardnumber`' given input columns: [name, sku, sin, accountnumber, purchase_date, sin, cardnumber, purchase_date, cardnumber, amount, sku, name, amount]; line 4 pos 17;
'Project [*]
+- 'Join LeftOuter, ('left.cardnumber = cardnumber#77)
   :- SubqueryAlias AS
   :  +- SubqueryAlias transactions
   :     +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).cardnumber, true) AS cardnumber#53, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).name, true) AS name#54, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).amount AS amount#55, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).purchase_date, true) AS purchase_date#56, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).sin, true) AS sin#57, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Transaction, true])).sku, true) AS sku#58]
   :        +- ExternalRDD [obj#52]
   +- SubqueryAlias accounts
      +- SerializeFromObject [staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).accountnumber, true) AS accountnumber#76, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).cardnumber, true) AS cardnumber#77, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).name, true) AS name#78, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).amount AS amount#79, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).purchase_date, true) AS purchase_date#80, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).sin, true) AS sin#81, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(assertnotnull(input[0, com.schedule1.datapassports.spark.TestBeans$Account, true])).sku, true) AS sku#82]
         +- ExternalRDD [obj#75]

    at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:88)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$2.applyOrElse(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformUp$1.apply(TreeNode.scala:289)
    at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:288)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:306)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:304)
    at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:286)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsUp$1.apply(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:279)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:289)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:290)
    at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$6.apply(QueryPlan.scala:298)
    at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:298)
    at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:268)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:85)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:78)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
    at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreachUp$1.apply(TreeNode.scala:126)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:126)
    at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:78)
    at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:91)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:52)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:632)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:691)

What is the cause of this failure, and how do I fix it?

The problem is that you used a reserved keyword (LEFT) as the alias, so the query is interpreted as:

SELECT *
FROM transactions AS ``
LEFT JOIN accounts ON left.cardnumber = accounts.cardnumber

with an empty alias. In fact, the following query:

SELECT *
FROM transactions AS ``
LEFT JOIN accounts ON ``.cardnumber = accounts.cardnumber

while not exactly equivalent, works just fine. This is standard SQL behavior, not a bug.
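
If you want to see what the parser actually made of the statement, note that spark.sql fails during analysis, before any plan can be printed, so you have to parse without analyzing. A minimal sketch for spark-shell (sessionState is an internal, unstable API, so treat this as inspection only):

// Parse the statement without running analysis; analysis is the step
// that throws the AnalysisException shown above.
val parsed = spark.sessionState.sqlParser.parsePlan(
  """SELECT *
    |FROM transactions AS left
    |JOIN accounts ON left.cardnumber = accounts.cardnumber""".stripMargin)

// The printed plan contains "Join LeftOuter": LEFT was consumed as the
// start of a LEFT JOIN, not as a table alias.
println(parsed)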

Choose a different name and everything will work:

Seq[Int]().toDF("cardnumber").createOrReplaceTempView("accounts")
Seq[Int]().toDF("cardnumber").createOrReplaceTempView("transactions")

spark.sql("""SELECT *
             FROM transactions AS l
             JOIN accounts AS r
             ON l.cardnumber = r.cardnumber""")
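
If you would rather sidestep the SQL parser altogether, the same join can be written with the DataFrame API, where the alias is a plain string argument, so reserved words cause no trouble. A sketch against the temp views registered above:

// Dataset.alias takes an arbitrary string, so even "left" is fine here
val l = spark.table("transactions").alias("left")
val r = spark.table("accounts").alias("right")

// Column references are built programmatically and never re-parsed
l.join(r, l("cardnumber") === r("cardnumber")).show()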
Backquoting the alias works as well:

spark.sql("""SELECT *
             FROM transactions AS `left`
             JOIN accounts AS r
             ON left.cardnumber = r.cardnumber""")
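
Note that only the alias definition needs the backquotes: as the example shows, the bare left in the ON clause still parses and resolves, evidently because LEFT is only ambiguous where it can begin a LEFT JOIN clause.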

Thanks! The only change was the parser, from the Scala-based parser to ANTLR 4, which caused a few edge cases like this one to break.
spark.sql("""SELECT *
             FROM transactions AS `left`
             JOIN accounts AS r
             ON left.cardnumber = r.cardnumber""")