PySpark - read a JSON file and return a DataFrame


I am reading the following JSON with PySpark:

test.json
{
  "Transactions": [
    {
      "ST": {
        "ST01": { "type": "271"},
        "ST02": {"type": "1001"},
        "ST03": {"type": "005010X279A1"}
      }
    }
  ]
}
+++++++++++++++++++++++++++++++++++
from pyspark.sql.types import *
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("Spark - JSON read").master("local[*]") \
    .config("spark.driver.bindAddress", "localhost") \
    .getOrCreate()

ST = StructType([
        StructField("ST01", StructType([StructField("type", StringType())])),
        StructField("ST02", StructType([StructField("type", StringType())])),
        StructField("ST03", StructType([StructField("type", StringType())])),
])
ST1 = StructType([
        StructField("ST01", StringType()),
        StructField("ST02", StringType()),
        StructField("ST03", StringType()),
])

Json_schema = StructType()
Json_schema.add("ST", ST1)
# Json_schema.add("ST", ST)
Schema = StructType([StructField("Transactions", ArrayType(Json_schema))])
df1 = spark.read.option("multiline", "true").json("test.json", schema = Schema)
df1.select(F.explode("Transactions")).select("col.*").select("ST.*").show(truncate=False)

The output I want is as follows: the `type` values must be the column values.

+-----+------+------------+
|ST01 |ST02  |ST03        |
+-----+------+------------+
|271  |1001  |005010X279A1|
+-----+------+------------+
But with the ST or ST1 schema I get:

With ST --> each column is a struct field
+-----+------+--------------+
|ST01 |ST02  |ST03          |
+-----+------+--------------+
|[271]|[1001]|[005010X279A1]|
+-----+------+--------------+

With ST1 --> it's a JSON string value for the ST01, ST02 and ST03 cols
+--------------+---------------+-----------------------+
|ST01          |ST02           |ST03                   |
+--------------+---------------+-----------------------+
|{"type":"271"}|{"type":"1001"}|{"type":"005010X279A1"}|
+--------------+---------------+-----------------------+
I could use ST01.* with an alias, but the input JSON is dynamic: it may or may not contain all three tags.


Any ideas?

Since your JSON is dynamic and may not contain all three tags, one "dynamic" approach is to use a for loop over the columns that actually exist. Once you have the column names, you can do:

df2 = df1.select(F.explode("Transactions")).select("col.*").select("ST.*")

# With the ST schema (struct type)
for col in df2.columns:
    df2 = df2.withColumn(col, F.expr(f'{col}.type'))

# With the ST1 schema (JSON string type)
for col in df2.columns:
    df2 = df2.withColumn(col, F.get_json_object(col, '$.type'))
Result:

+----+----+------------+
|ST01|ST02|ST03        |
+----+----+------------+
|271 |1001|005010X279A1|
+----+----+------------+
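To see why the loop over existing columns handles the dynamic case, the same extraction logic can be sketched without Spark using plain Python's json module. The payload below is a hypothetical variant of test.json where ST02 is deliberately missing, to show that iterating over whatever tags are present (like looping over df2.columns above) still works:

```python
import json

# Hypothetical input mirroring test.json, with ST02 omitted to
# simulate a transaction that does not contain all three tags.
payload = """
{
  "Transactions": [
    {
      "ST": {
        "ST01": {"type": "271"},
        "ST03": {"type": "005010X279A1"}
      }
    }
  ]
}
"""

data = json.loads(payload)
for txn in data["Transactions"]:
    st = txn["ST"]
    # Loop over the tags that actually exist, analogous to the
    # "for col in df2.columns" loop in the Spark answer.
    row = {tag: st[tag]["type"] for tag in st}
    print(row)
```

No tag name is hard-coded, so adding or removing ST0x entries in the input changes the output columns without touching the code, which is the same property the df2.columns loop gives you in Spark.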