How to read multi-line JSON with a root element in Spark Scala?
Here is a sample JSON file. In general, what I want to do is: if the file has a root tag, how do I read the JSON data into a DataFrame and print it on the console?
{
"Crimes": [
{
"ID": 11034701,
"Case Number": "JA366925",
"Date": "01/01/2001 11:00:00 AM",
"Block": "016XX E 86TH PL",
"IUCR": "1153",
"Primary Type": "DECEPTIVE PRACTICE",
"Description": "FINANCIAL IDENTITY THEFT OVER $ 300",
"Location Description": "RESIDENCE",
"Arrest": false,
"Domestic": false,
"Beat": 412,
"District": 4,
"Ward": 8,
"Community Area": 45,
"FBI Code": "11",
"Year": 2001,
"Updated On": "08/05/2017 03:50:08 PM"
},
{
"ID": 11162428,
"Case Number": "JA529032",
"Date": "11/28/2017 09:43:00 PM",
"Block": "026XX S CALIFORNIA BLVD",
"IUCR": "5131",
"Primary Type": "OTHER OFFENSE",
"Description": "VIOLENT OFFENDER: ANNUAL REGISTRATION",
"Location Description": "JAIL / LOCK-UP FACILITY",
"Arrest": true,
"Domestic": false,
"Beat": 1034,
"District": 10,
"Ward": 12,
"Community Area": 30,
"FBI Code": "26",
"X Coordinate": 1158280,
"Y Coordinate": 1886310,
"Year": 2017,
"Updated On": "02/11/2018 03:54:58 PM",
"Latitude": 41.843778126,
"Longitude": -87.694637678,
"Location": "(41.843778126, -87.694637678)"
}, {
"ID": 4080525,
"Case Number": "HL425503",
"Date": "06/16/2005 09:40:00 PM",
"Block": "062XX N KIRKWOOD AVE",
"IUCR": "1365",
"Primary Type": "CRIMINAL TRESPASS",
"Description": "TO RESIDENCE",
"Location Description": "RESIDENCE",
"Arrest": false,
"Domestic": false,
"Beat": 1711,
"District": 17,
"Ward": 39,
"Community Area": 12,
"FBI Code": "26",
"X Coordinate": 1145575,
"Y Coordinate": 1941395,
"Year": 2005,
"Updated On": "02/28/2018 03:56:25 PM",
"Latitude": 41.99518667,
"Longitude": -87.739863972,
"Location": "(41.99518667, -87.739863972)"
}, {
"ID": 4080539,
"Case Number": "HL422433",
"Date": "06/15/2005 12:55:00 PM",
"Block": "042XX S ST LAWRENCE AVE",
"IUCR": "0460",
"Primary Type": "BATTERY",
"Description": "SIMPLE",
"Location Description": "SCHOOL, PUBLIC BUILDING",
"Arrest": false,
"Domestic": false,
"Beat": 213,
"District": 2,
"Ward": 4,
"Community Area": 38,
"FBI Code": "08B",
"X Coordinate": 1180964,
"Y Coordinate": 1877123,
"Year": 2005,
"Updated On": "02/28/2018 03:56:25 PM",
"Latitude": 41.818075262,
"Longitude": -87.611675899,
"Location": "(41.818075262, -87.611675899)"
}
]
}
I am using this code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SparkSession}

val conf = new SparkConf().setAppName("demo").setMaster("local")
val sc = new SparkContext(conf)
val spark = SparkSession.builder().master("local").appName("ValidationFrameWork").getOrCreate()
val sqlContext = new SQLContext(sc)
sc.hadoopConfiguration.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
sc.hadoopConfiguration.set("parquet.enable.summary-metadata", "false")
// Read each file whole so the multi-line JSON document is not split line by line
val jsonRDD = sc.wholeTextFiles("D:/FinalScripts/output/Crimes1.json").map(x => x._2)
val namesJson = sqlContext.read.json(jsonRDD)
namesJson.printSchema
namesJson.registerTempTable("JSONdata")
val data = sqlContext.sql("select * from JSONdata")
data.show()
With this code I get a single column named Crimes, and all the data comes in a single row. How can I ignore the root element and get only the raw records? How can I also read such nested JSON into a DataFrame and print it on the console?

Try:
import org.apache.spark.sql.functions._
import spark.implicits._ // for the $"..." column syntax

ds.select(explode($"Crimes") as "exploded").select("exploded.*")
where ds is the Dataset created from the JSON records.
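For reference, a minimal end-to-end sketch (assuming Spark 2.2+, where the built-in multiLine option can replace the wholeTextFiles workaround; the app name and variable names here are just placeholders):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("CrimesJsonReader")
  .getOrCreate()
import spark.implicits._

// multiLine tells the JSON source that one record may span many lines
val ds = spark.read
  .option("multiLine", true)
  .json("D:/FinalScripts/output/Crimes1.json")

// explode emits one row per element of the Crimes array;
// selecting "exploded.*" promotes the struct fields to top-level columns
val crimes = ds.select(explode($"Crimes") as "exploded")
  .select("exploded.*")

crimes.printSchema()
crimes.show()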
Note that if your data is large, Spark needs to hold the entire document in memory before flattening it. Do you want the Crimes array as rows in the output DataFrame? Of course, sir. This code alone works fine for me. :) Thanks!
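Since the question went through a temp table and SQL, the same flattening can also be done in Spark SQL, as a sketch (assuming the namesJson DataFrame from the question; LATERAL VIEW explode is standard Spark SQL, and createOrReplaceTempView is the non-deprecated replacement for registerTempTable):

// Register the raw DataFrame and flatten the Crimes array in SQL
namesJson.createOrReplaceTempView("JSONdata")
val flat = spark.sql("SELECT c.* FROM JSONdata LATERAL VIEW explode(Crimes) t AS c")
flat.show()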