Scala: Why doesn't lines.map work, but lines.take.map works in Spark?


I am new to Scala and Spark.

I am working through an exercise with my teacher, but I ran into a problem with this code:

60    val lines = sc.textFile(inputPath)
61    val points = lines.map(parsePoint _).cache()
62    val ITERATIONS = args(2).toInt
Line 61 does not work. It works after I change it to this:

60    val lines = sc.textFile(inputPath)
61    val points = lines.take(149800).map(parsePoint _)  //149800 is the total number of lines
62    val ITERATIONS = args(2).toInt
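The change matters because of where the work runs: on an RDD, `map` is a lazy transformation that is later executed on the worker JVMs, while `take(n)` is an action that copies the first n lines into the driver, so the subsequent `.map` runs locally in the driver process. A minimal plain-Scala sketch of the lazy-vs-eager part (no Spark required; the `applied` counter is illustrative, not from the original code):

```scala
// Sketch: `map` on an Iterator is lazy, like a transformation on an RDD --
// nothing is computed until something forces evaluation, the way an RDD
// action does. Materializing with toArray plays the role of that action.
var applied = 0
val lines = Iterator("1,2", "3,4", "5,6")
val mapped = lines.map { s => applied += 1; s.split(",").map(_.toInt).sum }
assert(applied == 0)        // still lazy: the "parse" has not run anywhere yet
val points = mapped.toArray // forcing evaluation stands in for an action
assert(applied == 3)
assert(points.sameElements(Array(3, 7, 11)))
```

With a real RDD the deferred `map` runs on the executors, so a failure there (for example, the executors not having the application classes) appears only in the `lines.map` version, not in the driver-local `lines.take(n).map` version.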
The error message from `sbt run` is:

[error] (run-main) org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times
org.apache.spark.SparkException: Job failed: Task 0.0:1 failed more than 4 times
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:760)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:758)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:758)
at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:379)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:441)
at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:149)
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
[error] {file:/var/sdb/home/tim.tan/workspace/spark/}default-d3d73f/compile:run: Nonzero exit code: 1
[error] Total time: 52 s, completed Dec 20, 2013 5:42:18 PM
The stderr of the task node is:

13/12/20 17:42:16 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/20 17:42:16 INFO executor.StandaloneExecutorBackend: Connecting to driver: akka://spark@SHXJ-H07-SDB06:38975/user/StandaloneScheduler
13/12/20 17:42:17 INFO executor.StandaloneExecutorBackend: Successfully registered with driver
13/12/20 17:42:17 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to BlockManagerMaster: akka://spark@SHXJ-H07-SDB06:38975/user/BlockManagerMaster
13/12/20 17:42:17 INFO storage.MemoryStore: MemoryStore started with capacity 323.9 MB.
13/12/20 17:42:17 INFO storage.DiskStore: Created local directory at /tmp/spark-local-20131220174217-be8e
13/12/20 17:42:17 INFO network.ConnectionManager: Bound socket to port 52043 with id = ConnectionManagerId(TS-BH90,52043)
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Trying to register BlockManager
13/12/20 17:42:17 INFO storage.BlockManagerMaster: Registered BlockManager
13/12/20 17:42:17 INFO spark.SparkEnv: Connecting to MapOutputTracker: akka://spark@SHXJ-H07-SDB06:38975/user/MapOutputTracker
13/12/20 17:42:17 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-1b1a6c0b-965e-4834-a3d3-554c95442041
13/12/20 17:42:17 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/20 17:42:17 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:41811
13/12/20 17:42:18 ERROR executor.StandaloneExecutorBackend: Driver terminated or disconnected! Shutting down.
The log on the worker looks like this:

13/12/19 17:49:26 INFO worker.Worker: Asked to launch executor app-20131219174926-0001/2 for SparkHdfsLR
13/12/19 17:49:26 INFO worker.ExecutorRunner: Launch command: "java" "-cp" ":/var/bh/spark/conf:/var/bh/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.0-incubating-hadoop1.0.3.jar:/var/bh/spark/core/target/scala-2.9.3/test-classes:/var/bh/spark/repl/target/scala-2.9.3/test-classes:/var/bh/spark/mllib/target/scala-2.9.3/test-classes:/var/bh/spark/bagel/target/scala-2.9.3/test-classes:/var/bh/spark/streaming/target/scala-2.9.3/test-classes" "-Djava.library.path=/var/bh/hadoop/lib/native/Linux-amd64-64/" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.StandaloneExecutorBackend" "akka://spark@SHXJ-H07-SDB06:56158/user/StandaloneScheduler" "2" "TS-BH87" "8"
13/12/19 17:49:30 INFO worker.Worker: Asked to kill executor app-20131219174926-0001/2
13/12/19 17:49:30 INFO worker.ExecutorRunner: Runner thread for executor app-20131219174926-0001/2 interrupted
13/12/19 17:49:30 INFO worker.ExecutorRunner: Killing process!
It seems the executor did not launch successfully.


I don't know why. Can anyone give me a suggestion?

I found the reason it does not work.

Because of a bad configuration, Spark could only run in local mode. To make the code run in distributed mode, correct the configuration and specify the last two parameters of the SparkContext constructor:

new SparkContext(master, jobName, [sparkHome], [jars])

If the last two parameters are not specified, the Scala script will only run in local mode.
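A sketch of what that construction might look like for this job. All concrete values below (master URL, `sparkHome` path, jar name) are hypothetical placeholders, not taken from the question:

```scala
import org.apache.spark.SparkContext

// Illustrative placeholders -- substitute your own cluster values.
val sc = new SparkContext(
  "spark://SHXJ-H07-SDB06:7077",  // master URL of the cluster, not "local"
  "SparkHdfsLR",                  // job name
  "/var/bh/spark",                // sparkHome: Spark install path on the workers
  Seq("target/scala-2.9.3/sparkhdfslr_2.9.3-0.1.jar") // jars shipped to executors
)
```

Without the `jars` argument, classes such as `parsePoint` never reach the executor JVMs; `lines.take(n)` masks the problem because it copies the data into the driver and the subsequent `.map` runs there.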

Please specify the type of `lines`.
@senia What does "does not work" mean?
@AlexeyRomanov I updated the error log.