Python PySpark Structured Streaming: parsing nested JSON

My pipeline writes JSON to a Kafka topic, reads the JSON back from that topic, and finally sinks it to CSV. Everything works fine, but some keys contain nested JSON. How can I parse a list of JSON objects?

Example JSON:

{"a": "test", "b": "1234", "c": "temp", "d": [{"test1": "car", "test2": 345}, {"test3": "animal", "test4": 1}], "e": 50000}
You can see my code below:

import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.types import *
import pyspark.sql.functions as func
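# SparkSession with the Kafka source package for Spark 2.3.0 / Scala 2.11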
spark = SparkSession.builder\
                    .config('spark.jars.packages', 'org.apache.spark:spark-sql-kafka-0-10_2.11:2.3.0') \
                    .appName('kafka_stream_test')\
                    .getOrCreate()
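# Schema for the incoming JSON messages (every field is read as a plain string)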
ordersSchema = StructType() \
        .add("a", StringType()) \
        .add("b", StringType()) \
        .add("c", StringType()) \
        .add("d", StringType())\
        .add("e", StringType())

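# Read the topic as a streaming DataFrame from Kafka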
df = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "localhost:9092") \
    .option("subscribe", "test") \
    .load()


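# Parse the JSON payload in the Kafka value column and project its fields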
df_query = df \
    .select(func.from_json(func.col("value").cast("string"), ordersSchema).alias("parsed")) \
    .select("parsed.a", "parsed.b", "parsed.c", "parsed.d", "parsed.e")

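# Debug sink: print each micro-batch to the console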
df_s = df_query \
    .writeStream \
    .format("console") \
    .outputMode("append") \
    .trigger(processingTime="1 second") \
    .start()


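# File sink: append the parsed rows as CSV files, with a checkpoint for recovery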
aa = df_query \
    .writeStream \
    .format("csv")\
    .trigger(processingTime="5 seconds") \
    .option("path", "/var/kafka_stream_test_out/")\
    .option("checkpointLocation", "/var/kafka_stream_test_out/chk") \
    .start()


df.printSchema()
df_s.awaitTermination()
aa.awaitTermination()

Thanks, everyone!

The schema for column d is wrong: it must be an ArrayType. See the equivalent Scala code below, which you can translate to Python:

    val schema = new StructType().add("a", StringType)
      .add("b", StringType)
      .add("c", StringType)
      .add("d", ArrayType(new StructType().add("test1", StringType).add("test2", StringType)))
      .add("e", StringType)

The JSON has different field names in each element of d. I assume that is a typo and the fields are test1 and test2 in both.

Thanks, that solved it. But there is one more problem: the CSV data source does not support the array data type.
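
One common workaround (a sketch, not from this thread; it assumes df_query was built with the corrected ArrayType schema above) is to flatten the array before the CSV sink, either by serializing d back to a JSON string or by exploding it into one row per element:

    import pyspark.sql.functions as func

    # Option 1: keep one row per message and store the whole array as a
    # JSON string in a single CSV column (in recent Spark versions
    # to_json also handles arrays of structs)
    flat = df_query.withColumn("d", func.to_json(func.col("d")))

    # Option 2: one output row per array element, with the struct fields
    # promoted to top-level columns
    exploded = df_query \
        .withColumn("d", func.explode(func.col("d"))) \
        .select("a", "b", "c", "d.test1", "d.test2", "e")

Either variant leaves only atomic column types, which the csv sink can write.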