scala - join Spark RDDs by a function of the key
I am running Apache Spark 2.11 and using Scala. Is there a way to join two RDDs by a function of the key?
Specifically, if I have an RDD [(K, V1), (K-x, V2), (K+x, V3)], I want to produce an RDD [(K, (V1, V2)), (K-x, (V2)), (K+x, (V1, V3))], where the join function is f(K) = K - x.

I'm not sure about your exact input and output, but I hope the examples below help.
Example 1:
import org.apache.spark.sql.functions._
import sqlContext.implicits._
val df1 = Seq(("foo", "bar","too","aaa"), ("bar", "bar","aaa","foo"), ("aaa", "bbb","ccc","ddd")).toDF("k1","v1","v2","v3")
val df2 = Seq(("aaa", "bbb","ddd"), ("www", "eee","rrr"), ("jjj", "rrr","www")).toDF("k1","v1","v2")
//df1 = df1.withColumn("id", monotonically_increasing_id())
//df2 = df2.withColumn("id", monotonically_increasing_id())
df1.show()
df2.show()
val df3 = df2.join(df1, Seq("k1"), "outer")
// You can use any join type here (outer, inner, left, or right) as fits your requirements
df3.show()
Result:
+---+---+---+---+
| k1| v1| v2| v3|
+---+---+---+---+
|foo|bar|too|aaa|
|bar|bar|aaa|foo|
|aaa|bbb|ccc|ddd|
+---+---+---+---+
+---+---+---+
| k1| v1| v2|
+---+---+---+
|aaa|bbb|ddd|
|www|eee|rrr|
|jjj|rrr|www|
+---+---+---+
+---+----+----+----+----+----+
| k1| v1| v2| v1| v2| v3|
+---+----+----+----+----+----+
|jjj| rrr| www|null|null|null|
|aaa| bbb| ddd| bbb| ccc| ddd|
|bar|null|null| bar| aaa| foo|
|foo|null|null| bar| too| aaa|
|www| eee| rrr|null|null|null|
+---+----+----+----+----+----+
import org.apache.spark.sql.functions._
import sqlContext.implicits._
df1: org.apache.spark.sql.DataFrame = [k1: string, v1: string ... 2 more fields]
df2: org.apache.spark.sql.DataFrame = [k1: string, v1: string ... 1 more field]
df3: org.apache.spark.sql.DataFrame = [k1: string, v1: string ... 4 more fields]
Example 2:
import org.apache.spark.sql.functions._
import sqlContext.implicits._
val df12 = sc.parallelize(Seq(("1001","vaquar"),("2001","khan1"))).toDF("Key" ,"Value")
val df22 = sc.parallelize(Seq(("1001","Noman"),("2001","khan2"))).toDF("Key" ,"Value")
df12.show()
df22.show()
val df33 = df22.join(df12, Seq("Key"), "left_outer")
df33.show()
+----+------+
| Key| Value|
+----+------+
|1001|vaquar|
|2001| khan1|
+----+------+
+----+-----+
| Key|Value|
+----+-----+
|1001|Noman|
|2001|khan2|
+----+-----+
+----+-----+------+
| Key|Value| Value|
+----+-----+------+
|2001|khan2| khan1|
|1001|Noman|vaquar|
+----+-----+------+
import org.apache.spark.sql.functions._
import sqlContext.implicits._
rdd1: org.apache.spark.sql.DataFrame = [Key: string, Value: string]
df12: org.apache.spark.sql.DataFrame = [Key: string, Value: string]
df22: org.apache.spark.sql.DataFrame = [Key: string, Value: string]
df33: org.apache.spark.sql.DataFrame = [Key: string, Value: string ... 1 more field]
Here is an example:
//Just to simulate functional join
val appendZero = ((id: String) => id + "0")
val rdd1 = sc.parallelize(Seq(("100","Tom"),("200","Rick")))
val rdd2 = sc.parallelize(Seq(("1000","phone1000"),("2000","phone2000")))
val rdd3 = sc.parallelize(Seq(("1000","addr1000"),("2000","addr2000")))
rdd1.map(x => (appendZero(x._1), x._2)).join(rdd2).join(rdd3).map {
  case (k, ((v1, v2), v3)) => (k, (v1, v2), (k, v2), (k, v1, v3))
}.collect.foreach(println)
Output:
(2000,(Rick,phone2000),(2000,phone2000),(2000,Rick,addr2000))
(1000,(Tom,phone1000),(1000,phone1000),(1000,Tom,addr1000))
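As a quick sanity check, the same key-transform-then-join logic can be reproduced with plain Scala collections (no Spark required). This is only a sketch mirroring the names in the snippet above, with the right-hand sides held as Maps so the joins become lookups:

```scala
// Plain-Scala sketch of the "functional join" above: re-key rdd1 through
// appendZero, then join against rdd2 and rdd3 by map lookup (no Spark).
val appendZero: String => String = _ + "0"

val rdd1 = Seq(("100", "Tom"), ("200", "Rick"))
val rdd2 = Map("1000" -> "phone1000", "2000" -> "phone2000")
val rdd3 = Map("1000" -> "addr1000", "2000" -> "addr2000")

val joined = rdd1.flatMap { case (id, name) =>
  val k = appendZero(id)                     // the transformed key
  for (phone <- rdd2.get(k); addr <- rdd3.get(k))
    yield (k, (name, phone), (k, phone), (k, name, addr))
}
joined.foreach(println)
// (1000,(Tom,phone1000),(1000,phone1000),(1000,Tom,addr1000))
// (2000,(Rick,phone2000),(2000,phone2000),(2000,Rick,addr2000))
```

The only difference from the Spark version is that missing keys silently drop out of the `for` comprehension (an inner-join semantics), whereas Spark's `join` would do the same over partitioned data.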
If I understand your requirement correctly, this can be achieved with a leftOuterJoin against the RDD re-keyed through the inverse of the key function f (assuming f is invertible), as in the following example:
val x = 5
val f: (Int) => Int = _ - x
val fInverse: (Int) => Int = _ + x
val rdd = sc.parallelize(Seq(
(100, "V1"),
(100 - x, "V2"),
(100 + x, "V3")
))
rdd.
  leftOuterJoin(rdd.map { case (k, v) => (fInverse(k), v) }).
  map { case (k, (u, v)) => (k, (u, v.getOrElse(""))) }.
  collect
// res1: Array[(Int, (String, String))] = Array((105,(V3,V1)), (100,(V1,V2)), (95,(V2,"")))
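The same inverse-key trick can be verified with ordinary Scala collections; this is just a sketch of the logic without Spark, so the leftOuterJoin is emulated with a Map lookup and a default of "":

```scala
// Emulate the leftOuterJoin-on-fInverse trick with plain collections.
val x = 5
val fInverse: Int => Int = _ + x

val pairs = Seq((100, "V1"), (100 - x, "V2"), (100 + x, "V3"))

// Right side re-keyed through fInverse; unmatched keys fall back to "".
val shifted = pairs.map { case (k, v) => (fInverse(k), v) }.toMap
val joined = pairs.map { case (k, v) => (k, (v, shifted.getOrElse(k, ""))) }
joined.foreach(println)
// (100,(V1,V2))
// (95,(V2,))
// (105,(V3,V1))
```

Each key k on the left matches the value whose original key was f(k) = k - x, which is exactly the pairing the question asks for.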
Please add some sample input and expected output data to the question; that will likely get you an answer faster.