`_corrupt_record: string (nullable = true)` with a simple Spark Scala application

I am trying to run a simple/silly Spark Scala application example from Spark: The Definitive Guide. It reads a JSON file and does some processing on it, but running it reports `_corrupt_record: string (nullable = true)` as the schema. The JSON file has one JSON object per line. What is going wrong? Thanks.

The Scala code:

package com.databricks.example

import org.apache.log4j.Logger
import org.apache.spark.sql.SparkSession


object DFUtils extends Serializable {
  @transient lazy val logger = Logger.getLogger(getClass.getName)

  def pointlessUDF(raw: String) = {
    raw
  }

}

object DataFrameExample extends Serializable {
  def main(args: Array[String]): Unit = {
    val pathToDataFolder = args(0)

    val spark = SparkSession.builder().appName("Spark Example")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()

    // udf registration
    spark.udf.register("myUDF", DFUtils.pointlessUDF(_:String):String)
    val df = spark.read.json(pathToDataFolder + "data.json")
    df.printSchema()
    // df.collect.foreach(println)
    // val x = df.select("value").foreach(x => println(x));
    // val manipulated = df.groupBy("grouping").sum().collect().foreach(x => println(x))    
    // val manipulated = df.groupBy(expr("myUDF(group)")).sum().collect().foreach(x => println(x))
  }
}
$ cat /tmp/test/data.json

{"grouping":"group_1", value:5}
{"grouping":"group_1", value:6}
{"grouping":"group_3", value:7}
{"grouping":"group_2", value:3}
{"grouping":"group_4", value:2}
{"grouping":"group_1", value:1}
{"grouping":"group_2", value:2}
{"grouping":"group_3", value:3}
$ cat build.sbt
name := "example"
organization := "com.databricks"
version := "0.1-SNAPSHOT"
scalaVersion := "2.11.8"
// scalaVersion := "2.13.1"

// Spark Information
// val sparkVersion = "2.2.0"
val sparkVersion = "2.4.5"

// allows us to include spark packages
resolvers += "bintray-spark-packages" at
  "https://dl.bintray.com/spark-packages/maven/"
resolvers += "Typesafe Simple Repository" at
  "http://repo.typesafe.com/typesafe/simple/maven-releases/"
resolvers += "MavenRepository" at
  "https://mvnrepository.com/"

libraryDependencies ++= Seq(
// spark core
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
)
Build and package with SBT:

$ sbt package
[info] Loading project definition from /tmp/test/bookexample/project
[info] Loading settings for project bookexample from build.sbt ...
[info] Set current project to example (in build file:/tmp/test/bookexample/)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[warn] insecure HTTP request is deprecated 'http://repo.typesafe.com/typesafe/simple/maven-releases/'; switch to HTTPS or opt-in as ("Typesafe Simple Repository" at "http://repo.typesafe.com/typesafe/simple/maven-releases/").withAllowInsecureProtocol(true)
[info] Compiling 1 Scala source to /tmp/test/bookexample/target/scala-2.11/classes ...
[success] Total time: 28 s, completed Mar 19, 2020, 8:35:50 AM
Run with spark-submit:

$ ~/programs/spark/spark-2.4.5-bin-hadoop2.7/bin/spark-submit    --class com.databricks.example.DataFrameExample    --master local target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar  /tmp/test/
20/03/19 08:37:58 WARN Utils: Your hostname, ocean resolves to a loopback address: 127.0.1.1; using 192.168.122.1 instead (on interface virbr0)
20/03/19 08:37:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/03/19 08:37:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/03/19 08:38:00 INFO SparkContext: Running Spark version 2.4.5
20/03/19 08:38:00 INFO SparkContext: Submitted application: Spark Example
20/03/19 08:38:00 INFO SecurityManager: Changing view acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls to: t
20/03/19 08:38:00 INFO SecurityManager: Changing view acls groups to: 
20/03/19 08:38:00 INFO SecurityManager: Changing modify acls groups to: 
20/03/19 08:38:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(t); groups with view permissions: Set(); users  with modify permissions: Set(t); groups with modify permissions: Set()
20/03/19 08:38:01 INFO Utils: Successfully started service 'sparkDriver' on port 46163.
20/03/19 08:38:01 INFO SparkEnv: Registering MapOutputTracker
20/03/19 08:38:01 INFO SparkEnv: Registering BlockManagerMaster
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/03/19 08:38:01 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/03/19 08:38:01 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-42f9b92d-1420-4e04-aaf6-acb635a27907
20/03/19 08:38:01 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/03/19 08:38:02 INFO SparkEnv: Registering OutputCommitCoordinator
20/03/19 08:38:02 INFO Utils: Successfully started service 'SparkUI' on port 4040.
20/03/19 08:38:02 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.122.1:4040
20/03/19 08:38:02 INFO SparkContext: Added JAR file:/tmp/test/bookexample/target/scala-2.11/example_2.11-0.1-SNAPSHOT.jar at spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:03 INFO Executor: Starting executor ID driver on host localhost
20/03/19 08:38:03 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 35287.
20/03/19 08:38:03 INFO NettyBlockTransferService: Server created on 192.168.122.1:35287
20/03/19 08:38:03 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/03/19 08:38:03 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.122.1:35287 with 366.3 MB RAM, BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:03 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.122.1, 35287, None)
20/03/19 08:38:04 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('/user/hive/warehouse').
20/03/19 08:38:04 INFO SharedState: Warehouse path is '/user/hive/warehouse'.
20/03/19 08:38:05 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 97 ms to list leaf files for 1 paths.
20/03/19 08:38:10 INFO InMemoryFileIndex: It took 3 ms to list leaf files for 1 paths.
20/03/19 08:38:12 INFO FileSourceStrategy: Pruning directories with: 
20/03/19 08:38:12 INFO FileSourceStrategy: Post-Scan Filters: 
20/03/19 08:38:12 INFO FileSourceStrategy: Output Data Schema: struct<value: string>
20/03/19 08:38:12 INFO FileSourceScanExec: Pushed Filters: 
20/03/19 08:38:14 INFO CodeGenerator: Code generated in 691.376591 ms
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 285.2 KB, free 366.0 MB)
20/03/19 08:38:14 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 23.3 KB, free 366.0 MB)
20/03/19 08:38:14 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.122.1:35287 (size: 23.3 KB, free: 366.3 MB)
20/03/19 08:38:14 INFO SparkContext: Created broadcast 0 from json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194560 bytes, open cost is considered as scanning 4194304 bytes.
20/03/19 08:38:14 INFO SparkContext: Starting job: json at DataFrameExample.scala:31
20/03/19 08:38:14 INFO DAGScheduler: Got job 0 (json at DataFrameExample.scala:31) with 1 output partitions
20/03/19 08:38:14 INFO DAGScheduler: Final stage: ResultStage 0 (json at DataFrameExample.scala:31)
20/03/19 08:38:14 INFO DAGScheduler: Parents of final stage: List()
20/03/19 08:38:14 INFO DAGScheduler: Missing parents: List()
20/03/19 08:38:15 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31), which has no missing parents
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 12.3 KB, free 366.0 MB)
20/03/19 08:38:15 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 7.4 KB, free 366.0 MB)
20/03/19 08:38:15 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.122.1:35287 (size: 7.4 KB, free: 366.3 MB)
20/03/19 08:38:15 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1163
20/03/19 08:38:15 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at json at DataFrameExample.scala:31) (first 15 tasks are for partitions Vector(0))
20/03/19 08:38:15 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/03/19 08:38:15 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 8242 bytes)
20/03/19 08:38:15 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
20/03/19 08:38:15 INFO Executor: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar with timestamp 1584621482787
20/03/19 08:38:15 INFO TransportClientFactory: Successfully created connection to /192.168.122.1:46163 after 145 ms (0 ms spent in bootstraps)
20/03/19 08:38:15 INFO Utils: Fetching spark://192.168.122.1:46163/jars/example_2.11-0.1-SNAPSHOT.jar to /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/fetchFileTemp5270349024712252124.tmp
20/03/19 08:38:16 INFO Executor: Adding file:/tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764/userFiles-4bb98e5a-d49a-4e2f-9553-4e0982f41f0e/example_2.11-0.1-SNAPSHOT.jar to class loader
20/03/19 08:38:16 INFO FileScanRDD: Reading File path: file:///tmp/test/data.json, range: 0-256, partition values: [empty row]
20/03/19 08:38:16 INFO CodeGenerator: Code generated in 88.903645 ms
20/03/19 08:38:16 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1893 bytes result sent to driver
20/03/19 08:38:16 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1198 ms on localhost (executor driver) (1/1)
20/03/19 08:38:16 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
20/03/19 08:38:16 INFO DAGScheduler: ResultStage 0 (json at DataFrameExample.scala:31) finished in 1.639 s
20/03/19 08:38:16 INFO DAGScheduler: Job 0 finished: json at DataFrameExample.scala:31, took 1.893394 s
root
 |-- _corrupt_record: string (nullable = true)

20/03/19 08:38:16 INFO SparkContext: Invoking stop() from shutdown hook
20/03/19 08:38:16 INFO SparkUI: Stopped Spark web UI at http://192.168.122.1:4040
20/03/19 08:38:16 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/03/19 08:38:17 INFO MemoryStore: MemoryStore cleared
20/03/19 08:38:17 INFO BlockManager: BlockManager stopped
20/03/19 08:38:17 INFO BlockManagerMaster: BlockManagerMaster stopped
20/03/19 08:38:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/03/19 08:38:17 INFO SparkContext: Successfully stopped SparkContext
20/03/19 08:38:17 INFO ShutdownHookManager: Shutdown hook called
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-983f7f15-6df2-4fec-90b0-2534f4b91764
20/03/19 08:38:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-7d1fcc2e-af36-4dc4-ab6b-49b901e890ba

There is no problem with the code; the problem is with your data, which is not valid JSON. If you look at your data, the double quotes (") are missing around the value field names, and that is what produces `_corrupt_record: string`.
Change the data as follows and run the same code:

{"grouping":"group_1", "value":5}
{"grouping":"group_1", "value":6}
{"grouping":"group_3", "value":7}
{"grouping":"group_2", "value":3}
{"grouping":"group_4", "value":2}
{"grouping":"group_1", "value":1}
{"grouping":"group_2", "value":2}
{"grouping":"group_3", "value":3}

val df = spark.read.json("/spath/files/1.json")
df.show()
+--------+-----+                                                                
|grouping|value|
+--------+-----+
| group_1|    5|
| group_1|    6|
| group_3|    7|
| group_2|    3|
| group_4|    2|
| group_1|    1|
| group_2|    2|
| group_3|    3|
+--------+-----+
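
Even without fixing the data up front, Spark's default PERMISSIVE mode lets you inspect exactly which input lines failed to parse. Below is a minimal, self-contained sketch (the standalone session setup with a local[*] master is my assumption; both options shown are the defaults and are spelled out only for clarity):

import org.apache.spark.sql.SparkSession

object CorruptRecordDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("CorruptRecordDemo")
      .master("local[*]")
      .getOrCreate()

    // PERMISSIVE is the default mode: malformed lines land in the
    // corrupt-record column instead of failing the job.
    val df = spark.read
      .option("mode", "PERMISSIVE")
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("/tmp/test/data.json")

    // Since Spark 2.3, querying only the internal corrupt-record column of a
    // raw JSON scan raises an AnalysisException, so cache the parsed result
    // before inspecting it.
    df.cache()
    df.printSchema()          // only _corrupt_record when nothing parses
    df.show(truncate = false) // each unparseable input line, verbatim

    spark.stop()
  }
}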

As others in this thread have pointed out, the problem is that your input is not valid JSON. However, the library that Spark uses, and by extension Spark itself, can handle this case:

val df = spark
  .read
  .option("allowUnquotedFieldNames", "true")
  .json(pathToDataFolder + "data.json")
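
For background, Spark's JSON data source parses records with Jackson, and allowUnquotedFieldNames enables Jackson's ALLOW_UNQUOTED_FIELD_NAMES parser feature, which is off by default because unquoted field names are not standard JSON. Even if Spark accepts such a file, stricter consumers downstream may still reject it, so quoting the keys remains the cleaner fix.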

I haven't used Spark in a long time, but {"grouping":"group_1", value:5} may be the problem: value should be in quotes ("), like this: {"grouping":"group_1", "value":5}, to be valid JSON.

Thanks. I copied the JSON file from there and didn't know it was incorrect. I have also added the book's original Scala code to my post; running it errors out as well. I wonder whether you can get that to work too? Thanks again.

Could you provide the error that the original code produces?

The problem with the original code is in groupBy(expr("myUDF(group)")): expr is undefined, and the function registered as myUDF does not do anything and is rather pointless. If I replace it with groupBy("grouping"), it works. I do not know what to do about expr and myUDF. Thanks.

Are you saying the data.json file is fine?

No. It is definitely not valid JSON, but the underlying tooling is forgiving enough to handle this kind of malformed data.
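
To close the loop on the expr question in the comments: expr is defined in org.apache.spark.sql.functions, which the original code never imports, so the compiler reports it as undefined. Here is a hedged sketch of how the commented-out groupBy line could be made to compile against this data set; it assumes the spark session, df, and the "myUDF" registration from the question's code, and uses the actual column name grouping rather than the book's group:

import org.apache.spark.sql.functions.expr

// expr lives in org.apache.spark.sql.functions; without this import the
// line below does not compile, which matches the error described above.
df.groupBy(expr("myUDF(grouping)"))
  .sum()              // sums every numeric column per group
  .collect()
  .foreach(println)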