Creating a PySpark DataFrame from JSON string values with a schema


I am trying to manually create some dummy PySpark DataFrames.

Here is what I did:

from pyspark.sql.types import StructType,StructField, StringType, IntegerType
data2 = [('{"Time":"2020-08-01T08:14:20.650Z","version":null}')
            ]

schema = StructType([ \
    StructField("raw_json",StringType(),True)
  ])

df = spark.createDataFrame(data=data2,schema=schema)
df.printSchema()
df.show(truncate=False)
But I got an error:

TypeError: StructType can not accept object '{"Time":"2020-08-01T08:14:20.650Z","version":null}' in type <class 'str'>

The error is caused by your parentheses: data2 needs to be a list of lists, with one inner list per row, so replace the inner parentheses with square brackets:

data2 = [['{"applicationTimeStamp":"2020-08-01T08:14:20.650Z","version":null}']]

schema = StructType([StructField("raw_json",StringType(),True)])
df = spark.createDataFrame(data=data2,schema=schema)

df.show(truncate=False)
+------------------------------------------------------------------+            
|raw_json                                                          |
+------------------------------------------------------------------+
|{"applicationTimeStamp":"2020-08-01T08:14:20.650Z","version":null}|
+------------------------------------------------------------------+
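For completeness, here is a minimal sketch of the same idea using pyspark.sql.Row (this variant is an illustration, not part of the original answers): each record handed to createDataFrame just needs to line up with the schema's fields, and a Row does that explicitly by name.

from pyspark.sql import Row
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([StructField("raw_json", StringType(), True)])

# A Row names its field explicitly, so it maps onto the schema's raw_json column
rows = [Row(raw_json='{"applicationTimeStamp":"2020-08-01T08:14:20.650Z","version":null}')]
df = spark.createDataFrame(rows, schema=schema)
df.show(truncate=False)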

Alternatively, you can specify data2 as a list of tuples; adding a trailing comma inside the parentheses is what marks each element as a tuple:

from pyspark.sql.types import *

# Note the trailing comma inside the parentheses
data2 = [('{"applicationTimeStamp":"2020-08-01T08:14:20.650Z","version":null}',)]

schema = StructType([
    StructField("raw_json",StringType(),True)
])

df = spark.createDataFrame(data=data2,schema=schema)
df.show(truncate=False)
+------------------------------------------------------------------+
|raw_json                                                          |
+------------------------------------------------------------------+
|{"applicationTimeStamp":"2020-08-01T08:14:20.650Z","version":null}|
+------------------------------------------------------------------+
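The trailing comma is what matters here: in Python, parentheses around a single value are just grouping, and only the comma creates a tuple. A quick check in plain Python (independent of Spark) makes the difference visible:

# Parentheses alone do not make a tuple; the comma does
print(type(('{"a":1}')))    # <class 'str'>
print(type(('{"a":1}',)))   # <class 'tuple'>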
Try this:

import json

# json.loads parses each JSON string (data2 from the question); wrapping the
# result in a list makes it a one-column row, and toDF returns a DataFrame
df2 = sc.parallelize(data2).map(lambda x: [json.loads(x)]).toDF(schema=['raw_json'])
df2.show(truncate=False)
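One caveat with this last approach: json.loads turns each JSON string into a Python dict before the DataFrame is built, so the raw_json column here holds parsed data (Spark will infer a map-like type for it), not the original string.

If the eventual goal is to turn the raw JSON into typed columns, a hedged sketch using the standard pyspark.sql.functions.from_json would look like the following. It applies to the df built in the first answer, where raw_json is still a StringType column, and the json_schema below is an assumption about the payload, not something from the original answers:

from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

# Assumed schema for the JSON payload; both fields kept as strings for simplicity
json_schema = StructType([
    StructField("applicationTimeStamp", StringType(), True),
    StructField("version", StringType(), True)
])

# df is the DataFrame with the raw_json string column from the first answer
parsed = df.withColumn("parsed", F.from_json("raw_json", json_schema))
parsed.select("parsed.applicationTimeStamp", "parsed.version").show(truncate=False)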