Scala: replace Spark DataFrame column values with random values (e.g. UUID)

I have a DataFrame with a user_tag column, and I want to replace its values with new random UUIDs. How can I do that?
----------------------------------
| user_tag | pref_code | name    |
----------------------------------
| abc123   | Reg       | Richard |
| abc123   | Reg       | Mort    |
| abc123   | Disc      | Jack    |
I want to generate a random UUID for each user_tag in Spark, so that I end up with:
--------------------------------------------------------------
| user_tag                             | pref_code | name    |
--------------------------------------------------------------
| af3fb8b8-7ceb-4cec-ac27-2a034bb44bb9 | Reg       | Richard |
| snc22fls-2cgb-sas2-hc26-43d35ggg4522 | Reg       | Mort    |
| afgdw8b8-4fss-ycec-ycd7-haj3jbbj4bj9 | Disc      | Jack    |
I tried the following, but it generates the same UUID for every row:
val withUUID = dataFrame.withColumn("user_tag",
when(col("user_tag") === "abc123", randomUUID.toString).otherwise(col("user_tag")))
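The repeated value comes from evaluation timing: randomUUID.toString runs once on the driver, and Spark wraps the already-computed string in a literal column, so every row receives the same constant. A minimal plain-Scala sketch of the difference (no Spark required):

```scala
import java.util.UUID

// Evaluated once, up front: this mirrors passing randomUUID.toString
// straight into when(...) -- the String is computed before Spark ever
// sees it, so every row shares one value.
val once = UUID.randomUUID().toString
val copies = Seq.fill(3)(once)

// Evaluated per call: wrapping the generation in a function (which is
// what a udf does) produces a fresh value each time it is invoked.
val perCall = Seq.fill(3)(UUID.randomUUID().toString)

println(copies.distinct.size)  // one shared value
println(perCall.distinct.size) // a new UUID per call
```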
You can create a udf and call it inside the when/otherwise expression. The udf is invoked for every matching row, so each row gets a freshly generated random UUID.

Example:
import org.apache.spark.sql.functions.{udf, when}
import spark.implicits._ // requires a SparkSession named spark in scope (as in spark-shell)

// udf that generates a fresh random UUID on every invocation
val rand_UUID = udf(() => java.util.UUID.randomUUID().toString)

val df = Seq(("abc123", "Reg", "Richard"), ("abc123", "Reg", "Mort"))
  .toDF("user_tag", "pref_code", "name")

df.withColumn("user_tag", when('user_tag === "abc123", rand_UUID())
    .otherwise('user_tag))
  .show(false)
Result:
+------------------------------------+---------+-------+
|user_tag |pref_code|name |
+------------------------------------+---------+-------+
|e0b3c917-dcc5-4c42-bfe3-32af18b1cfec|Reg |Richard|
|90098d7d-8dc7-42df-a89b-5bd7f2c5cd99|Reg |Mort |
+------------------------------------+---------+-------+
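As an aside not covered in the answer above: newer Spark versions (2.3+, to my knowledge) ship a built-in uuid() SQL function, so the udf can be avoided by calling it through expr. A sketch under that assumption, reusing the df from the example:

```scala
import org.apache.spark.sql.functions.{col, expr, when}

// uuid() is Spark's built-in SQL function; expr() exposes it to the
// DataFrame API. It is non-deterministic and yields a fresh UUID per row.
val withBuiltInUuid = df.withColumn("user_tag",
  when(col("user_tag") === "abc123", expr("uuid()"))
    .otherwise(col("user_tag")))

withBuiltInUuid.show(false)
```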