Joining two pyspark dataframes on unique values in a column


Let's say I have two pyspark dataframes, users and shops. Some sample rows from both dataframes are shown below.

Users dataframe:

+---------+-------------+---------+
| idvalue | day-of-week | geohash |
+---------+-------------+---------+
| id-1    |           2 | gcutjjn |
| id-1    |           3 | gcutjjn |
| id-1    |           5 | gcutjht |
+---------+-------------+---------+
Shops dataframe:

+---------+-----------+---------+
| shop-id | shop-name | geohash |
+---------+-----------+---------+
| sid-1   | kfc       | gcutjjn |
| sid-2   | mcd       | gcutjhq |
| sid-3   | starbucks | gcutjht |
+---------+-----------+---------+
I need to join these two dataframes on the geohash column. I could of course do a plain equi-join, but the users dataframe is huge, containing billions of rows, and geohashes are likely to repeat both within and across idvalues. So I am wondering whether there is a way to perform the join only on the unique geohashes in the users dataframe against the geohashes in the shops dataframe. If that is possible, it would then be easy to replicate the shops entries for the matching geohashes in the resulting dataframe.
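
For reference, that decomposition can be written directly with the DataFrame API. This is only a sketch, assuming the two dataframes are named users and shops as above:

    # Sketch only: users and shops are the two DataFrames described above.
    unique_geo = users.select("geohash").distinct()   # one row per distinct geohash in users
    geo_shops = unique_geo.join(shops, "geohash")     # attach shops to the distinct geohashes
    result = users.join(geo_shops, "geohash")         # replicate the shop rows back onto users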


Possibly this could be done with a pandas UDF: I would do a groupby on users.idvalue, build a single-row dataframe by taking only the first row of each group (since all the ids within a group are the same), and join it with shops inside the UDF. Logically this should work, but I am not sure about performance, since UDFs are usually slower than Spark's native transformations. Any ideas are welcome.
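
One way to sketch that pandas route is a grouped-map applyInPandas that merges each user group against a pandas copy of shops. Note the caveat: a Spark DataFrame cannot be joined inside a UDF, so this only works if shops is small enough to be pulled into memory. All names, underscore-style column names, and the output schema below are illustrative assumptions:

    import pandas as pd

    # Assumption: shops fits comfortably in memory; a Spark DataFrame cannot be used
    # inside a UDF, so it is materialised as a pandas DataFrame first.
    shops_pd = shops.toPandas()

    def join_with_shops(group: pd.DataFrame) -> pd.DataFrame:
        # group holds all users rows for a single idvalue
        return group.merge(shops_pd, on="geohash", how="inner")

    # Illustrative output schema; it must match the columns of the merged frame.
    schema = "idvalue string, day_of_week int, geohash string, shop_id string, shop_name string"
    result = users.groupBy("idvalue").applyInPandas(join_with_shops, schema=schema)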

One idea: if possible, use PySpark SQL to select the distinct geohashes and create a temporary table, then do the join against that table instead of the full dataframe.
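
A rough sketch of that idea, assuming a SparkSession named spark, dataframes named users and shops, and shop columns named with underscores (shop_id, shop_name):

    # Register the distinct geohashes and the shops as temporary views
    users.select("geohash").distinct().createOrReplaceTempView("distinct_geohash")
    shops.createOrReplaceTempView("shops")

    # Join the (much smaller) table of distinct geohashes against shops
    geo_shops = spark.sql("""
        SELECT g.geohash, s.shop_id, s.shop_name
        FROM distinct_geohash g
        JOIN shops s ON g.geohash = s.geohash
    """)

The result can then be joined back to users on geohash to replicate the shop rows, as described in the question.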

You said that your users dataframe is huge and that "geohashes are likely to repeat both within and across idvalues". However, you did not mention whether there are duplicated geohashes in your shops dataframe as well.

If there are no duplicated hashes in the latter, I think a simple join will solve your problem:

val userDf = Seq(("id-1",2,"gcutjjn"),("id-2",2,"gcutjjn"),("id-1",3,"gcutjjn"),("id-1",5,"gcutjht")).toDF("idvalue","day_of_week","geohash")
val shopDf = Seq(("sid-1","kfc","gcutjjn"),("sid-2","mcd","gcutjhq"),("sid-3","starbucks","gcutjht")).toDF("shop_id","shop_name","geohash")

userDf.show
+-------+-----------+-------+
|idvalue|day_of_week|geohash|
+-------+-----------+-------+
|   id-1|          2|gcutjjn|
|   id-2|          2|gcutjjn|
|   id-1|          3|gcutjjn|
|   id-1|          5|gcutjht|
+-------+-----------+-------+

shopDf.show
+-------+---------+-------+
|shop_id|shop_name|geohash|
+-------+---------+-------+
|  sid-1|      kfc|gcutjjn|
|  sid-2|      mcd|gcutjhq|
|  sid-3|starbucks|gcutjht|
+-------+---------+-------+

shopDf
    .join(userDf,Seq("geohash"),"inner")
    .groupBy($"geohash",$"shop_id",$"idvalue")
    .agg(collect_list($"day_of_week").alias("days"))
    .show
+-------+-------+-------+------+
|geohash|shop_id|idvalue|  days|
+-------+-------+-------+------+
|gcutjjn|  sid-1|   id-1|[2, 3]|
|gcutjht|  sid-3|   id-1|   [5]|
|gcutjjn|  sid-1|   id-2|   [2]|
+-------+-------+-------+------+
If there are duplicated hashes in the shops dataframe, one possible approach is to remove those duplicated hashes from it (if your requirements allow it) and then perform the same join:

val userDf = Seq(("id-1",2,"gcutjjn"),("id-2",2,"gcutjjn"),("id-1",3,"gcutjjn"),("id-1",5,"gcutjht")).toDF("idvalue","day_of_week","geohash")
val shopDf = Seq(("sid-1","kfc","gcutjjn"),("sid-2","mcd","gcutjhq"),("sid-3","starbucks","gcutjht"),("sid-4","burguer king","gcutjjn")).toDF("shop_id","shop_name","geohash")

userDf.show
+-------+-----------+-------+
|idvalue|day_of_week|geohash|
+-------+-----------+-------+
|   id-1|          2|gcutjjn|
|   id-2|          2|gcutjjn|
|   id-1|          3|gcutjjn|
|   id-1|          5|gcutjht|
+-------+-----------+-------+

shopDf.show
+-------+------------+-------+
|shop_id|   shop_name|geohash|
+-------+------------+-------+
|  sid-1|         kfc|gcutjjn|  <<  Duplicated geohash
|  sid-2|         mcd|gcutjhq|
|  sid-3|   starbucks|gcutjht|
|  sid-4|burguer king|gcutjjn|  <<  Duplicated geohash
+-------+------------+-------+

//Dataframe with hashes to exclude:
val excludedHashes = shopDf.groupBy("geohash").count.filter("count > 1")
excludedHashes.show
+-------+-----+
|geohash|count|
+-------+-----+
|gcutjjn|    2|
+-------+-----+

//Create a dataframe of shops without the ones with duplicated hashes
val cleanShopDf = shopDf.join(excludedHashes,Seq("geohash"),"left_anti")
cleanShopDf.show
+-------+-------+---------+
|geohash|shop_id|shop_name|
+-------+-------+---------+
|gcutjhq|  sid-2|      mcd|
|gcutjht|  sid-3|starbucks|
+-------+-------+---------+

//Perform the same join operation
cleanShopDf.join(userDf,Seq("geohash"),"inner")
    .groupBy($"geohash",$"shop_id",$"idvalue")
    .agg(collect_list($"day_of_week").alias("days"))
    .show
+-------+-------+-------+----+
|geohash|shop_id|idvalue|days|
+-------+-------+-------+----+
|gcutjht|  sid-3|   id-1| [5]|
+-------+-------+-------+----+
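
Since the question is tagged pyspark, a rough PySpark equivalent of the Scala snippets above might look like this; the dataframe names user_df and shop_df and the underscore column names are assumptions carried over from the example:

    from pyspark.sql import functions as F

    # Geohashes that appear more than once in shops
    excluded_hashes = shop_df.groupBy("geohash").count().filter("count > 1")

    # Drop the shops with duplicated geohashes, then join and collect the days
    clean_shop_df = shop_df.join(excluded_hashes, ["geohash"], "left_anti")

    result = (clean_shop_df
              .join(user_df, ["geohash"], "inner")
              .groupBy("geohash", "shop_id", "idvalue")
              .agg(F.collect_list("day_of_week").alias("days")))
    result.show()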

The sample users df you showed has duplicated rows. If your actual data is like that, you could drop the duplicated rows in the users df and then do the join.

Actually, only the geohash column has duplicate entries; the other columns all differ, so I cannot drop rows. I have updated the table to clear up the confusion.

So you have to keep all the rows in the users df, and your shops data has no duplicated geohashes. In that case the way to perform the join is simply a join; the problem has been reduced to its basic form.