Full outer join in RDD Scala Spark


I have the following two files:

File 1

File 2

Now I want to join the rows where field 1 has the same value. I want something like this:

0000005 崎村______ 50 F 82 79 16 21 80
0000003 杉山______ 26 F 30 50 71 36 30
0000007 梶川______ 42 F 50 2  33 15 62

You can use the DataFrame join concept instead of an RDD join; it is much simpler. You can refer to the sample code further below (Steps 1 to 5). Hope this helps. I have assumed your data is in the same format you showed above; if it is CSV or some other format, you can skip Step 2 and adjust Step 1 to match the data format. If you need the output in RDD form, use Step 5; otherwise you can ignore it, as noted in the comments in the code snippet. For readability I changed the names in the data to A, B, C.


I found the solution; here is my code:

// Turn each line into a (key, value) pair: the first field is the join key,
// the remaining fields (joined by a single space) become the value.
val rddPair1 = logData1.map { x =>
  val data = x.split(" ")
  (data(0), data.drop(1).mkString(" "))
}

val rddPair2 = logData2.map { x =>
  val data = x.split(" ")
  (data(0), data.drop(1).mkString(" "))
}

// Inner join on the key, then print "key value1 value2" for every match.
rddPair1.join(rddPair2).collect().foreach { f =>
  println(f._1 + " " + f._2._1 + " " + f._2._2)
}
Result:

0000003 杉山______ 26 F 30 50 71 36 30
0000005 崎村______ 50 F 82 79 16 21 80
0000007 梶川______ 42 F 50 2 33 15 62
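
Note that join on pair RDDs is an inner join, so keys that appear in only one of the files are dropped. Since the title asks about a full outer join, a minimal sketch using fullOuterJoin (assuming the same rddPair1/rddPair2 as above) could look like this; each side comes back as an Option, and the "-" placeholder is just illustrative:

// fullOuterJoin keeps keys present in either RDD; a missing side is None.
rddPair1.fullOuterJoin(rddPair2).collect().foreach { case (key, (left, right)) =>
  println(key + " " + left.getOrElse("-") + " " + right.getOrElse("-"))
}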
You should be able to simply use join? Can you show your code? This is a solution, but could you show your code? Thanks.
//Step1: Loading file1 and file2 into corresponding DataFrames in text format

import org.apache.spark.sql.functions.{col, split}
import spark.implicits._   // needed for the $"..." column syntax

val df1 = spark.read.format("text").load("<path of file1>")
val df2 = spark.read.format("text").load("<path of file2>")

//Step2: Splitting the single "value" column into separate columns for the join key

val file1 = df1
  .withColumn("col1", split($"value", " ")(0))
  .withColumn("col2", split($"value", " ")(1))
  .withColumn("col3", split($"value", " ")(2))
  .withColumn("col4", split($"value", " ")(3))
  .select("col1", "col2", "col3", "col4")

/* 
+-------+-------+----+----+                                                     
|col1   |col2   |col3|col4|
+-------+-------+----+----+
|0000003|A______|26  |F   |
|0000005|B______|50  |F   |
|0000007|C______|42  |F   |
+-------+-------+----+----+

*/

val file2 = df2
  .withColumn("col1", split($"value", " ")(0))
  .withColumn("col2", split($"value", " ")(1))
  .withColumn("col3", split($"value", " ")(2))
  .withColumn("col4", split($"value", " ")(3))
  .withColumn("col5", split($"value", " ")(4))
  .withColumn("col6", split($"value", " ")(5))
  .select("col1", "col2", "col3", "col4", "col5", "col6")

/*
+-------+----+----+----+----+----+
|col1   |col2|col3|col4|col5|col6|
+-------+----+----+----+----+----+
|0000005|82  |79  |16  |21  |80  |
|0000001|46  |39  |8   |5   |21  |
|0000004|58  |71  |20  |10  |6   |
|0000009|60  |89  |33  |18  |6   |
|0000003|30  |50  |71  |36  |30  |
|0000007|50  |2   |33  |15  |62  |
+-------+----+----+----+----+----+

*/
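
As the answer notes above, Step 2 can be skipped if the files are read as delimited data in the first place. A minimal sketch, assuming the fields are separated by a single space and using Spark's CSV reader with a custom separator (df1Direct is just an illustrative name):

// Read the space-delimited file directly into columns instead of splitting "value".
val df1Direct = spark.read
  .option("sep", " ")
  .csv("<path of file1>")
  .toDF("col1", "col2", "col3", "col4")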

//Step3: Alias the DataFrames so the join columns can be referenced unambiguously, which improves readability

val file01 = file1.as("f1")
val file02 = file2.as("f2")

//Step4: Joining the two DataFrames on the key column
file01.join(file02, col("f1.col1") === col("f2.col1")).show(false)

/*
+-------+-------+----+----+-------+----+----+----+----+----+                    
|col1   |col2   |col3|col4|col1   |col2|col3|col4|col5|col6|
+-------+-------+----+----+-------+----+----+----+----+----+
|0000005|B______|50  |F   |0000005|82  |79  |16  |21  |80  |
|0000003|A______|26  |F   |0000003|30  |50  |71  |36  |30  |
|0000007|C______|42  |F   |0000007|50  |2   |33  |15  |62  |
+-------+-------+----+----+-------+----+----+----+----+----+
*/
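
The join above is an inner join, so unmatched keys from either file are dropped. If you actually want the full outer join from the question title, the DataFrame join method also accepts a join type; a minimal sketch using the same aliases, where columns from a missing side come back as null:

// "full_outer" keeps rows whose key appears in only one of the two files.
file01.join(file02, col("f1.col1") === col("f2.col1"), "full_outer").show(false)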

// Step5: if you want the joined data as an RDD, you can use the command below

file01.join(file02,col("f1.col1") === col("f2.col1")).rdd.collect

/* 
Array[org.apache.spark.sql.Row] = Array([0000005,B______,50,F,0000005,82,79,16,21,80], [0000003,A______,26,F,0000003,30,50,71,36,30], [0000007,C______,42,F,0000007,50,2,33,15,62])
*/
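
If you prefer the output as plain space-separated lines, like the RDD solution prints, one option (a sketch, not part of the original answer) is to flatten each Row before printing:

// Turn each joined Row back into one space-separated line and print it.
file01.join(file02, col("f1.col1") === col("f2.col1"))
  .rdd
  .map(row => row.toSeq.mkString(" "))
  .collect()
  .foreach(println)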