Apache Spark: how to create columns from list values in a PySpark dataframe


I have a PySpark dataframe that looks like this:

Subscription_id Subscription parameters
5516            ["'catchupNotificationsEnabled': True","'newsNotificationsEnabled': True","'autoDownloadsEnabled': False"]
I need the output dataframe to be:

Subscription_id catchupNotificationsEnabled newsNotificationsEnabled    autoDownloadsEnabled
5516    True    True    False
How can I achieve this in PySpark? I tried several options using UDFs, but without success.

Any help is much appreciated.

Assuming the "Subscription_parameters" column is of ArrayType():

First, create the dataframe:

from pyspark.sql import Row
import pyspark.sql.functions as F

df = spark.createDataFrame([Row(Subscription_id=5516,
                                Subscription_parameters=["'catchupNotificationsEnabled': True",
                                                         "'newsNotificationsEnabled': True",
                                                         "'autoDownloadsEnabled': False"])])
Split this array into three columns by simple indexing:

df = df.select("Subscription_id", 
      F.col("Subscription_parameters")[0].alias("catchupNotificationsEnabled"),
      F.col("Subscription_parameters")[1].alias("newsNotificationsEnabled"),
      F.col("Subscription_parameters")[2].alias("autoDownloadsEnabled"))
Now the dataframe is split correctly, but each new column still contains a string such as "'catchupNotificationsEnabled': True".

Then I suggest updating the column values by checking whether each one contains "True".

The resulting dataframe is as expected:

+---------------+---------------------------+------------------------+--------------------+
|Subscription_id|catchupNotificationsEnabled|newsNotificationsEnabled|autoDownloadsEnabled|
+---------------+---------------------------+------------------------+--------------------+
|           5516|                       true|                    true|               false|
+---------------+---------------------------+------------------------+--------------------+

PS: if the column is not ArrayType(), you may need to tweak the code a little.
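The per-entry parsing can also be done in plain Python, for example inside a UDF, as the question mentions trying. Here is a minimal sketch; `parse_params` is a hypothetical name of mine, not part of the answer above:

```python
def parse_params(entries):
    """Turn entries like "'autoDownloadsEnabled': False" into a dict of booleans."""
    result = {}
    for entry in entries:
        # Split "'key': value" at the colon, then strip quotes and whitespace.
        key, _, value = entry.partition(':')
        result[key.strip().strip("'")] = value.strip() == 'True'
    return result

parse_params(["'catchupNotificationsEnabled': True",
              "'newsNotificationsEnabled': True",
              "'autoDownloadsEnabled': False"])
# {'catchupNotificationsEnabled': True, 'newsNotificationsEnabled': True, 'autoDownloadsEnabled': False}
```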

You can use something like the following:

>>> df.show()
+---------------+-----------------------+
|Subscription_id|Subscription_parameters|
+---------------+-----------------------+
|           5516|   ["'catchupNotific...|
+---------------+-----------------------+

>>> 
>>> df1 = df.select('Subscription_id')
>>> 
>>> data = df.select('Subscription_parameters').rdd.map(list).collect()
>>> data = [i[0][1:-1].split(',') for i in data]
>>> data = {i.split(':')[0][2:-1]:i.split(':')[1].strip()[:-1] for i in data[0]}
>>> 
>>> df2 = spark.createDataFrame(sc.parallelize([data]))
>>> 
>>> df3 = df1.crossJoin(df2)
>>> 
>>> df3.show()
+---------------+--------------------+---------------------------+------------------------+
|Subscription_id|autoDownloadsEnabled|catchupNotificationsEnabled|newsNotificationsEnabled|
+---------------+--------------------+---------------------------+------------------------+
|           5516|               False|                       True|                    True|
+---------------+--------------------+---------------------------+------------------------+
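The three `data = ...` lines above can be traced in plain Python, no Spark needed. Note that this answer assumes "Subscription_parameters" is a plain string column; the `raw` value below is what one collected row would then hold (an assumption, since the question does not give the schema):

```python
# What df.select('Subscription_parameters').rdd.map(list).collect() would yield
# if the column holds the raw string shown by df.show() (assumed schema):
raw = ('["\'catchupNotificationsEnabled\': True",'
       '"\'newsNotificationsEnabled\': True",'
       '"\'autoDownloadsEnabled\': False"]')
data = [[raw]]

data = [i[0][1:-1].split(',') for i in data]   # strip [ ] and split the entries
data = {i.split(':')[0][2:-1]: i.split(':')[1].strip()[:-1] for i in data[0]}
# {'catchupNotificationsEnabled': 'True', 'newsNotificationsEnabled': 'True',
#  'autoDownloadsEnabled': 'False'}
```

Note that the resulting values are the strings 'True'/'False', not Python booleans, which matches the text shown by df3.show() above.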

Do you know the keys in advance?
@pault Yes, there are only these 3 parameters (catchupNotificationsEnabled, newsNotificationsEnabled and autoDownloadsEnabled); different records have different True and False values for them.
Can you provide the schema of the dataframe? Is the type of "Subscription_parameters" StructType() or ArrayType()? (or something else)
Thank you both for your help. Both solutions worked for me!
To convert those strings to booleans, update each column by checking whether its value contains "True":

df = df.withColumn('catchupNotificationsEnabled',
                   F.when(F.col("catchupNotificationsEnabled").contains("True"), True).otherwise(False))\
       .withColumn('newsNotificationsEnabled',
                   F.when(F.col("newsNotificationsEnabled").contains("True"), True).otherwise(False))\
       .withColumn('autoDownloadsEnabled',
                   F.when(F.col("autoDownloadsEnabled").contains("True"), True).otherwise(False))
+---------------+---------------------------+------------------------+--------------------+
|Subscription_id|catchupNotificationsEnabled|newsNotificationsEnabled|autoDownloadsEnabled|
+---------------+---------------------------+------------------------+--------------------+
|           5516|                       true|                    true|               false|
+---------------+---------------------------+------------------------+--------------------+
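The F.col(...).contains("True") test corresponds to this plain-Python check (a sketch; the `row` dict below is made up for illustration and mimics one row after the indexing step):

```python
# One row after the array has been split into three string columns:
row = {'catchupNotificationsEnabled': "'catchupNotificationsEnabled': True",
       'newsNotificationsEnabled': "'newsNotificationsEnabled': True",
       'autoDownloadsEnabled': "'autoDownloadsEnabled': False"}

# True exactly when the stored string contains "True", mirroring
# F.when(F.col(name).contains("True"), True).otherwise(False)
flags = {name: 'True' in value for name, value in row.items()}
# {'catchupNotificationsEnabled': True, 'newsNotificationsEnabled': True,
#  'autoDownloadsEnabled': False}
```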