Modifying data in pyspark


I am using spark 2.0.1 with python-2.7 to modify and flatten some nested JSON data.

Original data (in JSON format)

Using the withColumn and udf functions, I was able to flatten the original data into the dataframe shown below:

----------------------------------------------------------------------------------------------------------------------------
| created                 | class   | sub_class  | meta        | interests                                                 |
----------------------------------------------------------------------------------------------------------------------------
| 28-12-2001T12:02:01.143 | Class_A | SubClass_B | 'some-info' | "{key1: 'value1', 'key2':'value2', ..., 'keyN':'valueN'}" |
----------------------------------------------------------------------------------------------------------------------------
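(The original nested JSON and the flattening code are not shown in the question. As a rough sketch only, the interests part of that flattening step could be a udf that parses the serialized map into a MapType column; raw_df and the source column name interests_json below are assumed names for illustration, not the asker's actual code.)

import json
import pyspark.sql.functions as F
from pyspark.sql.types import MapType, StringType

# Hypothetical udf: parse the serialized interests string into a map column
# so that it can later be exploded into key/value rows.
parse_interests = F.udf(lambda s: json.loads(s) if s else None,
                        MapType(StringType(), StringType()))

# raw_df and 'interests_json' are assumed names for illustration only.
flat_df = raw_df.withColumn("interests", parse_interests(F.col("interests_json")))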
Now I want to transform/split this single row into multiple rows based on the interests column. How can I do that?

Expected output

---------------------------------------------------------------------
| created                 | class   | sub_class  | meta        | key  | value  |  
---------------------------------------------------------------------
| 28-12-2001T12:02:01.143 | Class_A | SubClass_B | 'some-info' | key1 | value1 |
---------------------------------------------------------------------
| 28-12-2001T12:02:01.143 | Class_A | SubClass_B | 'some-info' | key2 | value2 |
---------------------------------------------------------------------
| 28-12-2001T12:02:01.143 | Class_A | SubClass_B | 'some-info' | keyN | valueN |
---------------------------------------------------------------------
Thanks

Use explode.

Here is the complete example (most of it is just setting up the data):

import pandas as pd
import pyspark.sql.functions as sql
from pyspark import SparkContext
from pyspark.sql import SQLContext

# sc = SparkContext()  # already available as `sc` in the pyspark shell
sqlContext = SQLContext(sc)

# Build a one-row example; the last field is a Python dict that Spark infers as a MapType column.
s = "28-12-2001T12:02:01.143 | Class_A | SubClass_B |some-info| {'key1': 'value1', 'key2':'value2', 'keyN':'valueN'}"
data = s.split('|')
# Strip stray spaces from the plain fields and parse the trailing dict literal.
data = [x.strip() for x in data[:-1]] + [eval(data[-1].strip())]
p_df = pd.DataFrame(data).T
s_df = sqlContext.createDataFrame(p_df, schema=['created', 'class', 'sub_class', 'meta', 'intrests'])

# Keep every column except the map, then explode the map into one (key, value) row per entry.
s_df.select(s_df.columns[:-1] + [sql.explode(s_df.intrests).alias("key", "value")]).show()
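
explode applied to a MapType column emits one row per map entry as two generated columns, which alias("key", "value") renames, so the single input row is split into one row per interest and the result matches the expected output in the question. If the exploded frame is needed for further processing rather than just display, it can simply be kept in a variable (a small usage sketch):

exploded = s_df.select(s_df.columns[:-1] +
                       [sql.explode(s_df.intrests).alias("key", "value")])
exploded.show(truncate=False)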