Comparing two columns from different DFs in Spark Scala


I am trying to compare two columns, each coming from a different DF. I have two DFs:

df1
+----+-------+-------+
|Game|rev_1_t|rev_2_t|
+----+-------+-------+
|  CA|    AA |    AA |
|  FT|    B  |    C  |
+----+-------+-------+

df_prev
+----+-------+-------+
|Game|rev_1_t|rev_2_t|
+----+-------+-------+
|  CA|    C  |   AA  |  
|  FT|    B  |   C   |
+----+-------+-------+
I want to compare `rev_1_t` from df1 with `rev_1_t` from df_prev and, in a new column called `change`, put "Y" if the value changed and "N" if it did not. At the same time, I want to add a new column called `prev_value` that stores the previous value of `rev_1_t` from df_prev. The same applies to `rev_2_t`. The output would be:

Output:
+----+-------+--------+------------+---------+----------+--------------+
|Game|rev_1_t| change | prev_value | rev_2_t | change_2 | prev_value_2 | 
+----+-------+--------+------------+---------+----------+--------------+
|  CA|    C  |   Y    |      C     |     AA  |   Y      |      C       |
|  FT|    B  |   Y    |      B     |     C   |   Y      |      B       |
+----+-------+--------+------------+---------+----------+--------------+
I am trying to do it as you can see below, but I keep getting different errors:

val change = df1.withColumn(
    "change", when(df1("rev_1_t") === df_prev("rev_1_t"), "N").otherwise("Y"))
  .withColumn(
    "prev_value", df_prev("rev_1_t"))

You can do a join and then compare the relevant columns:

import org.apache.spark.sql.expressions.Window

val result = df1.join(df_prev, Seq("Game"), "left")
    .select(col("Game"), 
            df1("rev_1_t"), 
            when(df1("rev_1_t") === df_prev("rev_1_t"), "N").otherwise("Y").as("change"), 
            df_prev("rev_1_t").as("prev_value"), 
            df1("rev_2_t"), 
            when(df1("rev_2_t") === df_prev("rev_2_t"), "N").otherwise("Y").as("change_2"), 
            df_prev("rev_2_t").as("prev_value_2")
    )
    .withColumn("change", max("change").over(Window.orderBy(lit(1))))
    .withColumn("change_2", max("change_2").over(Window.orderBy(lit(1))))

result.show
+----+-------+------+----------+-------+--------+------------+
|Game|rev_1_t|change|prev_value|rev_2_t|change_2|prev_value_2|
+----+-------+------+----------+-------+--------+------------+
|  CA|     AA|     Y|         C|     AA|       N|          AA|
|  FT|      B|     Y|         B|      C|       N|           C|
+----+-------+------+----------+-------+--------+------------+

Wow, you are always so fast! My mistake: the `change` column should be "Y" whenever there is at least one change in any row, and "N" otherwise, so in your example output the `change` column for game FT would also be "Y".

Hi mck! I am getting the following error: AnalysisException: Resolved attribute(s) ... in Spark. What could be the problem? I tried cloning the DF into another DF, but the error persists... It is probably related to the duplicate column names...
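The "Resolved attribute(s)" AnalysisException typically comes from referencing columns of the original DataFrames (`df1("rev_1_t")`, `df_prev("rev_1_t")`) after a join in which both sides carry the same column names. A common workaround is to rename the previous-period columns before joining, so every attribute name is unique and plain `col(...)` references resolve unambiguously. A minimal sketch, assuming a local Spark session and the sample data from the question (the `prev_` prefix is an arbitrary naming choice):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, when, max, lit}
import org.apache.spark.sql.expressions.Window

val spark = SparkSession.builder.appName("compare").master("local[*]").getOrCreate()
import spark.implicits._

val df1     = Seq(("CA", "AA", "AA"), ("FT", "B", "C")).toDF("Game", "rev_1_t", "rev_2_t")
val df_prev = Seq(("CA", "C",  "AA"), ("FT", "B", "C")).toDF("Game", "rev_1_t", "rev_2_t")

// Rename df_prev's value columns so no two attributes share a name after the join.
val prevRenamed = df_prev
  .withColumnRenamed("rev_1_t", "prev_rev_1_t")
  .withColumnRenamed("rev_2_t", "prev_rev_2_t")

val result = df1.join(prevRenamed, Seq("Game"), "left")
  .withColumn("change",   when(col("rev_1_t") === col("prev_rev_1_t"), "N").otherwise("Y"))
  .withColumn("change_2", when(col("rev_2_t") === col("prev_rev_2_t"), "N").otherwise("Y"))
  // Propagate "Y" to every row when any row changed ("Y" > "N" lexicographically).
  // Window.orderBy(lit(1)) pulls all rows into a single partition, so this does
  // not scale to large data; a groupBy/crossJoin would avoid the shuffle-to-one.
  .withColumn("change",   max("change").over(Window.orderBy(lit(1))))
  .withColumn("change_2", max("change_2").over(Window.orderBy(lit(1))))
  .select(col("Game"),
          col("rev_1_t"), col("change"),   col("prev_rev_1_t").as("prev_value"),
          col("rev_2_t"), col("change_2"), col("prev_rev_2_t").as("prev_value_2"))

result.show
```

Because the renamed columns belong to the joined plan rather than to the original `df_prev`, none of the references cross DataFrame lineages, which is what triggers the exception in the first place.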