Scala: How do I use a column created during a Spark join? (Ambiguity error)


Tags: scala, apache-spark

I've been struggling with this in Scala for a while, and I can't seem to find a clear solution.

I have two DataFrames:

val Companies = Seq(
  (8, "Yahoo"),
  (-5, "Google"),
  (12, "Microsoft"),
  (-10, "Uber")
).toDF("movement", "Company")
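The post never shows how LookUpTable is defined. Judging from the analyzer plan and the join output further down (codes B/S mapping to Buy/Sell), it presumably looks something like the following sketch; this is a reconstruction, not the asker's actual table:

```scala
// Reconstructed lookup table: columns Code and Description are taken
// from the resolved plan and the joined output shown later in the post.
val LookUpTable = Seq(
  ("B", "Buy"),
  ("S", "Sell")
).toDF("Code", "Description")
```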
I need to create a column on Companies that lets me join to a lookup table. It's a simple case statement: if movement is negative then Sell, otherwise Buy. I then need to join the lookup table on this newly created column:

val joined = Companies.as("Companies")
    .withColumn("Code",expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END"))
    .join(LookUpTable.as("LookUpTable"), $"LookUpTable.Code" === $"Code", "left_outer")
However, I keep running into the following error:

org.apache.spark.sql.AnalysisException: Reference 'Code' is ambiguous, could be: Code, LookUpTable.Code.;
  at org.apache.spark.sql.catalyst.expressions.package$AttributeSeq.resolve(package.scala:259)
  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveChildren(LogicalPlan.scala:101)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$40.apply(Analyzer.scala:888)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$40.apply(Analyzer.scala:890)
  at org.apache.spark.sql.catalyst.analysis.package$.withPosition(package.scala:53)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveReferences$$resolve(Analyzer.scala:887)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveReferences$$resolve$2.apply(Analyzer.scala:896)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveReferences$$resolve$2.apply(Analyzer.scala:896)
  at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:329)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:327)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveReferences$$resolve(Analyzer.scala:896)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$35.apply(Analyzer.scala:956)
  at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveReferences$$anonfun$apply$9$$anonfun$applyOrElse$35.apply(Analyzer.scala:956)
  at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105)
  at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105
I've tried adding an alias for Code, but that doesn't work:

val joined = Companies.as("Companies")
    .withColumn("Code",expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END"))
    .join(LookUpTable.as("LookUpTable"), $"LookUpTable.Code" === $"Companies.Code", "left_outer")

org.apache.spark.sql.AnalysisException: cannot resolve '`Companies.Code`' given input columns: [Code, LookUpTable.Code, LookUpTable.Description, Companies.Company, Companies.movement];;
'Join LeftOuter, (Code#102625 = 'Companies.Code)
:- Project [movement#102616, Company#102617, CASE WHEN (movement#102616 > 0) THEN B ELSE S END AS Code#102629]
:  +- SubqueryAlias `Companies`
:     +- Project [_1#102613 AS movement#102616, _2#102614 AS Company#102617]
:        +- LocalRelation [_1#102613, _2#102614]
+- SubqueryAlias `LookUpTable`
   +- Project [_1#102622 AS Code#102625, _2#102623 AS Description#102626]
      +- LocalRelation [_1#102622, _2#102623]

The only workaround I've found is to give the newly created column a different name, but that produces an extra column, which doesn't feel right:


val joined = Companies.as("Companies")
    .withColumn("_Code",expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END")).as("Code")
    .join(LookUpTable.as("LookUpTable"), $"LookUpTable.Code" === $"Code", "left_outer")


joined.show()

+--------+---------+-----+----+-----------+
|movement|  Company|_Code|Code|Description|
+--------+---------+-----+----+-----------+
|       8|    Yahoo|    B|   B|        Buy|
|       8|    Yahoo|    B|   S|       Sell|
|      -5|   Google|    S|   B|        Buy|
|      -5|   Google|    S|   S|       Sell|
|      12|Microsoft|    B|   B|        Buy|
|      12|Microsoft|    B|   S|       Sell|
|     -10|     Uber|    S|   B|        Buy|
|     -10|     Uber|    S|   S|       Sell|
+--------+---------+-----+----+-----------+


Is there a way to join on the newly created column without having to create a new DataFrame, or a new column, via an alias?

If you need columns from two different DataFrames that share the same name, you have to use aliases. This is because the Spark DataFrame API builds a schema for each DataFrame, and within a given schema you can never have two or more columns with the same name.


This is also why, in SQL, a SELECT query without aliases works, but a CREATE TABLE AS SELECT over the same query throws an error such as "duplicate column".

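To make the SQL analogy concrete, here is a hedged sketch (it assumes an active SparkSession named spark and the Companies/LookUpTable DataFrames from the question; the exact error text varies by Spark version):

```scala
import org.apache.spark.sql.functions.expr

// After adding the CASE expression, both sides expose a column named Code.
val withCode = Companies.withColumn("Code",
  expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END"))

withCode.createOrReplaceTempView("companies")
LookUpTable.createOrReplaceTempView("lookup")

// A SELECT that returns two columns named Code runs fine...
spark.sql(
  "SELECT c.Code, l.Code FROM companies c JOIN lookup l ON c.Code = l.Code"
).show()

// ...but materializing the same result fails, because a table schema
// cannot hold duplicate column names, e.g.:
//   org.apache.spark.sql.AnalysisException: Found duplicate column(s) ...
// spark.sql("CREATE TABLE t AS SELECT c.Code, l.Code FROM companies c " +
//   "JOIN lookup l ON c.Code = l.Code")
```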
Have you tried using Seq in the Spark DataFrame join?

1. Use Seq (no duplicate columns):

    val joined = Companies.as("Companies")
        .withColumn("Code", expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END"))
        .join(LookUpTable.as("LookUpTable"), Seq("Code"), "left_outer")

2. Alias after withColumn (but it produces duplicate columns):

    val joined = Companies.withColumn("Code", expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END")).as("Companies")
        .join(LookUpTable.as("LookUpTable"), $"LookUpTable.Code" === $"Companies.Code", "left_outer")

3. An expression can be used in the join condition:

    val codeExpression = expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END")
    val joined = Companies.as("Companies")
        .join(LookUpTable.as("LookUpTable"), $"LookUpTable.Code" === codeExpression, "left_outer")

Comments:

  • Thank you. This might work, but in our project we have over 2,000 expressions, so creating all of them could be a bit difficult.

  • Thanks Mahesh. Does the second option allow me to have multiple aliases with the same name? i.e. `val joined = Companies.withColumn("Code", expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END")).as("Companies").withColumn("Description", expr("TRIM(Description)")).as("Companies").join(LookUpTable.as("LookUpTable"), $"LookUpTable.Code" === $"Companies.Code", "left_outer").select($"Companies.Code", $"Companies.Description")`

  • @Nirmie please upvote, and accept the answer if it meets your requirement.

  • @Nirmie you only need to keep the last .as("Companies"); you don't need .as("Companies") after every withColumn. So it would be: `val joined = Companies.withColumn("Code", expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END")).withColumn("Description", expr("TRIM(Description)")).as("Companies").join(LookUpTable.as("LookUpTable"), $"LookUpTable.Code" === $"Companies.Code", "left_outer").select($"Companies.Code", $"Companies.Description")`
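As a usage note on the Seq variant: joining on a Seq of column names performs an equi-join and keeps a single copy of the join key, so the ambiguity never arises. A minimal end-to-end sketch (assuming a local SparkSession and the reconstructed LookUpTable above):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr

val spark = SparkSession.builder().master("local[*]").appName("join-demo").getOrCreate()
import spark.implicits._

val Companies = Seq((8, "Yahoo"), (-5, "Google"), (12, "Microsoft"), (-10, "Uber"))
  .toDF("movement", "Company")
// Assumed lookup contents, inferred from the question's output
val LookUpTable = Seq(("B", "Buy"), ("S", "Sell")).toDF("Code", "Description")

val joined = Companies
  .withColumn("Code", expr("CASE WHEN movement > 0 THEN 'B' ELSE 'S' END"))
  .join(LookUpTable, Seq("Code"), "left_outer")

// The join key appears once: schema is (Code, movement, Company, Description),
// and each company matches exactly one lookup row.
joined.show()
```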