
Join two dataframes in Scala on a column whose values are not exact

scala, dataframe, apache-spark, hive, apache-spark-sql

I am trying to join two dataframes on a column whose values are not exactly the same.

Given below is DF1:

+--------+-----+------+
| NUM_ID | TIME|SG1_V |
+--------+-----+------+
|XXXXX01 |1001 |79.0  |
|XXXXX01 |1005 |88.0  |
|XXXXX01 |1010 |99.0  |
|XXXXX01 |1015 |null  |
|XXXXX01 |1020 |100.0 |
|XXXXX02 |1001 |81.0  |
|XXXXX02 |1010 |91.0  |
|XXXXX02 |1050 |93.0  |
|XXXXX02 |1060 |93.0  |
|XXXXX02 |1070 |93.0  |
+--------+-----+------+
Given below is DF2:

+---------+-----+------+
| NUM_ID  | TIME|SG2_V |
+---------+-----+------+
|XXXXX01  |1001 |  99.0|
|XXXXX01  |1003 |  22.0|
|XXXXX01  |1007 |  85.0|
|XXXXX01  |1011 |  1.0 |

|XXXXX02  |1001 |  22.0|
|XXXXX02  |1009 |  85.0|
|XXXXX02  |1048 |  1.0 |
|XXXXX02  |1052 |  99.0|
+---------+-----+------+
I have to join these two DFs on the NUM_ID column, which should match exactly, and on the TIME column, which may or may not have an exact matching value.

The TIME in DF2 may or may not contain the exact value present in DF1. If the value is not exact, then the highest nearest value available has to be used (i.e. the TIME value taken from DF2 should be <= the TIME value in DF1). This will be clearer from the expected output shown below:

+--------+-----+------+-----+------+
| NUM_ID | TIME|SG1_V | TIME|SG2_V |
+--------+-----+------+-----+------+
|XXXXX01 |1001 |79.0  |1001 |  99.0|
|XXXXX01 |1005 |88.0  |1003 |  22.0|
|XXXXX01 |1010 |99.0  |1007 |  85.0|
|XXXXX01 |1015 |null  |1011 |  1.0 |
|XXXXX01 |1020 |100.0 |1011 |  1.0 |

|XXXXX02 |1001 |81.0  |1001 |  22.0|
|XXXXX02 |1010 |91.0  |1009 |  85.0|
|XXXXX02 |1050 |93.0  |1048 |  1.0 |
|XXXXX02 |1060 |93.0  |1052 |  99.0|
|XXXXX02 |1070 |93.0  |1052 |  99.0|
+--------+-----+------+-----+------+
For NUM_ID XXXXX01, the TIME (1005) in DF1 is not available in DF2, so it takes the nearest value (1003), which is less than 1005.

How can I do the join in such a way that, when there is no exact value, the nearest value is used for joining?

Any leads are appreciated.
Thanks in advance.

If you need to join using two fields plus a specific interval on one of them, you can do the following:

  import org.apache.spark.sql.{DataFrame, Row, SparkSession}
  import org.apache.spark.sql.functions.lit
  import org.apache.spark.sql.types.{DoubleType, IntegerType, StringType, StructField, StructType}

  val spark = SparkSession.builder().master("local[1]").getOrCreate()

  val df1 : DataFrame = spark.createDataFrame(spark.sparkContext.parallelize(Seq(Row("XXXXX01",1001,79.0),
    Row("XXXXX01",1005,88.0),
    Row("XXXXX01",1010,99.0),
    Row("XXXXX01",1015, null),
    Row("XXXXX01",1020,100.0),
    Row("XXXXX02",1001,81.0))),
    StructType(Seq(StructField("NUM_ID", StringType, false), StructField("TIME", IntegerType, false), StructField("SG1_V", DoubleType, true))))

  val df2 : DataFrame = spark.createDataFrame(spark.sparkContext.parallelize(Seq(Row("XXXXX01",1001,79.0),
    Row("XXXXX01",1001, 99.0),
    Row("XXXXX01",1003, 22.0),
    Row("XXXXX01",1007, 85.1),
    Row("XXXXX01",1011, 1.0),
    Row("XXXXX02",1001,22.0))),
    StructType(Seq(StructField("NUM_ID", StringType, false), StructField("TIME", IntegerType, false), StructField("SG1_V", DoubleType, false))))

  val interval : Int = 10

  def main(args: Array[String]) : Unit = {
    df1.join(df2, ((df1("TIME")) - df2("TIME") > lit(interval)) && df1("NUM_ID") === df2("NUM_ID")).show()
  } 
Which results in:

+-------+----+-----+-------+----+-----+
| NUM_ID|TIME|SG1_V| NUM_ID|TIME|SG1_V|
+-------+----+-----+-------+----+-----+
|XXXXX01|1015| null|XXXXX01|1001| 79.0|
|XXXXX01|1015| null|XXXXX01|1001| 99.0|
|XXXXX01|1015| null|XXXXX01|1003| 22.0|
|XXXXX01|1020|100.0|XXXXX01|1001| 79.0|
|XXXXX01|1020|100.0|XXXXX01|1001| 99.0|
|XXXXX01|1020|100.0|XXXXX01|1003| 22.0|
|XXXXX01|1020|100.0|XXXXX01|1007| 85.1|
+-------+----+-----+-------+----+-----+
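
Note that this inner join drops df1 rows that have no match inside the interval, and a single df1 row can match several df2 rows. As suggested in the comments further down, a left join keeps every df1 row. A minimal sketch building on the df1/df2/interval definitions above (the leftJoined name is just illustrative):

  // Sketch only: a left join keeps every df1 row even when no df2 row
  // satisfies the two-field condition; unmatched rows get nulls on the right.
  val leftJoined = df1.join(df2,
    df1("NUM_ID") === df2("NUM_ID") && (df1("TIME") - df2("TIME") > lit(interval)),
    "left")
  leftJoined.show()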


The simple way is to use one of Spark's window functions, row_number() or rank():

scala> spark.sql("""
     |   select * from (
     |     select *,
     |            row_number() over (partition by df1.NUM_ID, df1.TIME
     |                               order by (df1.TIME - df2.TIME)) rno
     |     from df1 join df2
     |       on df2.NUM_ID = df1.NUM_ID
     |      and df2.TIME <= df1.TIME
     |   ) T
     |   where T.rno = 1
     | """).show()
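
The query above assumes df1 and df2 are visible to Spark SQL (for example as Hive tables). If the data is only held in DataFrames, they can be registered as temporary views first. A minimal sketch, assuming the DataFrames are the finalABC and finalXYZ used further below and that the SparkSession is named spark:

// Sketch only: expose the DataFrames to Spark SQL without writing to Hive.
finalABC.createOrReplaceTempView("df1")
finalXYZ.createOrReplaceTempView("df2")

val nearest = spark.sql("""
  select * from (
    select *,
           row_number() over (partition by df1.NUM_ID, df1.TIME
                              order by (df1.TIME - df2.TIME)) rno
    from df1 join df2
      on df2.NUM_ID = df1.NUM_ID
     and df2.TIME <= df1.TIME
  ) T
  where T.rno = 1
""")
nearest.show()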


The above solution joins the dataframes after saving them into Hive tables.

I tried to join the two dataframes by applying the same logic without saving them into Hive tables, as below:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, row_number}
import spark.implicits._  // assumes the SparkSession is named spark (needed for the $"..." syntax)

val finalSignals = finalABC.as("df1")
  .join(finalXYZ.as("df2"),
    $"df1.NUM_ID" === $"df2.NUM_ID" && $"df2.TIME" <= $"df1.TIME",
    "left")
  .withColumn("rno",
    row_number().over(
      Window.partitionBy($"df1.NUM_ID", $"df1.TIME")
        .orderBy($"df1.TIME" - $"df2.TIME")))
  .select(
    col("df1.NUM_ID").as("NUM_ID"),
    col("df1.TIME"),
    col("df2.NUM_ID").as("NUM_ID2"),
    col("df1.TIME").as("TIME2"),
    col("rno"))
  .filter("rno == 1")


@EmiCareOfCell44 The base TIME column considered is the one in DF1. All the values in DF1's TIME column should be present in the resulting DF. In the above solution the TIME values 1005 and 1010 are missing??

So you can use a left join and take from the right side only the rows that match the two-field join condition. If you need an example I can update the answer.

@mazaneicha Since there are two columns each named NUM_ID and TIME, how can I give them different aliases and select only a few columns from the result? I tried using aliases but ended up with this error:

org.apache.spark.sql.AnalysisException: Reference 'NUM_ID' is ambiguous, could be: T.NUM_ID, T.NUM_ID;
Use a column list in the projection instead of *. That should work, shouldn't it? Something like select df1.NUM_ID as NUM_ID1, df2.NUM_ID as NUM_ID2, ... from df1 join df2 ...
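
For example, something along these lines (only a sketch; the column list mirrors the expected output and is not from the original comment):

spark.sql("""
  select df1.NUM_ID as NUM_ID1, df1.TIME as TIME1, df1.SG1_V,
         df2.NUM_ID as NUM_ID2, df2.TIME as TIME2, df2.SG2_V
  from df1 join df2
    on df2.NUM_ID = df1.NUM_ID
   and df2.TIME <= df1.TIME
""")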
@mazaneicha Please check this.