Apache Spark: reading JSON with an extra column

json, scala, apache-spark

I'm reading a Hive table with two columns, id and jsonString. I can easily turn jsonString into a Spark data structure by calling spark.read.json, but I also have to add the id column.

val jsonStr1 = """{"fruits":[{"fruit":"banana"},{"fruid":"apple"},{"fruit":"pera"}],"bar":{"foo":"[\"daniel\",\"pedro\",\"thing\"]"},"daniel":"daniel data random","cars":["montana","bagulho"]}"""
val jsonStr2 = """{"fruits":[{"dt":"banana"},{"fruid":"apple"},{"fruit":"pera"}],"bar":{"foo":"[\"daniel\",\"pedro\",\"thing\"]"},"daniel":"daniel data random","cars":["montana","bagulho"]}"""
val jsonStr3 = """{"fruits":[{"a":"banana"},{"fruid":"apple"},{"fruit":"pera"}],"bar":{"foo":"[\"daniel\",\"pedro\",\"thing\"]"},"daniel":"daniel data random","cars":["montana","bagulho"]}"""


import spark.implicits._  // for toDS and the $"..." column syntax

case class Foo(id: Int, json: String)

val ds = Seq(Foo(1, jsonStr1), Foo(2, jsonStr2), Foo(3, jsonStr3)).toDS
val jsonDF = spark.read.json(ds.select($"json").rdd.map(r => r.getAs[String](0)).toDS)

jsonDF.show
+--------------------+------------------+------------------+--------------------+
|                 bar|              cars|            daniel|              fruits|
+--------------------+------------------+------------------+--------------------+
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[,,, banana], [,...|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[, banana,,], [,...|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,,,], [,,...|
+--------------------+------------------+------------------+--------------------+
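(As an aside, the RDD detour in the snippet above isn't strictly needed: since Spark 2.2, spark.read.json also accepts a Dataset[String] directly, so the following sketch is equivalent.)

// Equivalent, without the round-trip through an RDD (Spark 2.2+)
val jsonDF2 = spark.read.json(ds.select($"json").as[String])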
I want to add the id column from the Hive table, like this:

+--------------------+------------------+------------------+--------------------+---+
|                 bar|              cars|            daniel|              fruits| id|
+--------------------+------------------+------------------+--------------------+---+
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[,,, banana], [,...|  1|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[, banana,,], [,...|  2|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,,,], [,,...|  3|
+--------------------+------------------+------------------+--------------------+---+
I won't use regular expressions for this.


I created a UDF that takes both fields as arguments, adds the desired field (id) using a proper JSON library, and returns a new JSON string. It works like a charm, but I was hoping the Spark API offered a better way to do this. I'm using Apache Spark 2.3.0.

One way to go about it is to apply from_json to the JSON string with the corresponding schema, as follows:

import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import spark.implicits._

case class Foo(id: Int, json: String)

val df = Seq(Foo(1, jsonStr1), Foo(2, jsonStr2), Foo(3, jsonStr3)).toDF

val schema = StructType(Seq(
  StructField("bar", StructType(Seq(
    StructField("foo", StringType, true)
    )), true),
  StructField("cars", ArrayType(StringType, true), true),
  StructField("daniel", StringType, true),
  StructField("fruits", ArrayType(StructType(Seq(
    StructField("a", StringType, true),
    StructField("dt", StringType, true),
    StructField("fruid", StringType, true),
    StructField("fruit", StringType, true)
  )), true), true)
))

df.
  withColumn("json_col", from_json($"json", schema)).
  select($"id", $"json_col.*").
  show
// +---+--------------------+------------------+------------------+--------------------+
// | id|                 bar|              cars|            daniel|              fruits|
// +---+--------------------+------------------+------------------+--------------------+
// |  1|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[null,null,null,...|
// |  2|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[null,banana,nul...|
// |  3|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,null,nul...|
// +---+--------------------+------------------+------------------+--------------------+


I already knew about the from_json function, but in my case working out the schema of every JSON by hand is "impossible". I was expecting Spark to have an "idiomatic" interface for this.

Here is my final solution:

ds.select($"id", from_json($"json", jsonDF.schema).alias("_json_path")).select($"_json_path.*", $"id").show

+--------------------+------------------+------------------+--------------------+---+
|                 bar|              cars|            daniel|              fruits| id|
+--------------------+------------------+------------------+--------------------+---+
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[,,, banana], [,...|  1|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[, banana,,], [,...|  2|
|[["daniel","pedro...|[montana, bagulho]|daniel data random|[[banana,,,], [,,...|  3|
+--------------------+------------------+------------------+--------------------+---+
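Spelled out end to end (a sketch reusing the names above), the idea is to infer the schema once from the raw JSON strings and then re-attach it with from_json while keeping id:

import org.apache.spark.sql.functions.from_json
import spark.implicits._

// Infer the schema once from the raw JSON strings (Spark 2.2+ accepts a
// Dataset[String] here), then reuse it to parse every row alongside id.
val inferredSchema = spark.read.json(ds.select($"json").as[String]).schema

val result = ds
  .select($"id", from_json($"json", inferredSchema).alias("parsed"))
  .select($"parsed.*", $"id")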


I already know that solution; the JSON documents are really huge, and creating the schema by hand would be really silly. If Spark inferred the schema automatically, the way spark.read.json does, I could use from_json. I'd like to infer the schema with a call to spark.read.json and pass it as a parameter to from_json, but I'm not sure it's as simple as it sounds, short of overriding serialization. ds.select($"id", from_json($"json", jsonDF.schema).alias("_json_path")).show

@Mantovani, you can of course get the schema from jsonDF, but that schema itself takes an extra transformation to produce. For a large dataset with a complex JSON schema, it may be better to create a JSON file containing a single line of JSON data, run spark.read.json on it, and take its schema via .schema.
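A sketch of that suggestion, with /tmp/sample.json standing in as a hypothetical file holding a single representative JSON line:

// Infer the schema from a tiny one-line sample file, then reuse it for
// the full table. The path is a placeholder.
val sampleSchema = spark.read.json("/tmp/sample.json").schema

ds.select($"id", from_json($"json", sampleSchema).alias("parsed"))
  .select($"parsed.*", $"id")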