Scala: Spark still tries to connect to HDFS when reading a text file from the file system

Tags: scala, apache-spark, mesos, dcos

I've just created a DC/OS cluster and am trying to run a simple Spark task that reads data from /mnt/mesos/sandbox:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Application")

    println("STARTING JOB!")

    val sc = new SparkContext(conf)

    // Explicit file:// scheme: read from the Mesos sandbox on the local
    // file system, not from HDFS.
    val rdd = sc.textFile("file:///mnt/mesos/sandbox/foo")

    println(rdd.count)

    println("ENDING JOB!")
  }
}
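For comparison, here is a minimal fallback sketch that skips Hadoop's FileSystem layer entirely by reading the file on the driver and parallelizing the lines (assuming the file fits in driver memory; localRdd is a hypothetical name):

// Hypothetical fallback: read the sandbox file with plain Scala I/O on the
// driver and distribute the lines, bypassing Hadoop input formats.
// Only sensible for small files that fit in driver memory.
import scala.io.Source

val lines = Source.fromFile("/mnt/mesos/sandbox/foo").getLines().toSeq
val localRdd = sc.parallelize(lines)
println(localRdd.count)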
I'm submitting the job with:

dcos spark run --submit-args='--conf spark.mesos.uris=https://dripit-spark.s3.amazonaws.com/foo --class SimpleApp https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' --verbose
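The spark.mesos.uris setting is what asks the Mesos fetcher to download foo into the sandbox next to the jar. As a sanity check, a minimal sketch (hedged; meant to run in the driver before building the RDD) to confirm the file actually landed at the expected path:

// Hypothetical sanity check: confirm the Mesos fetcher placed the file in
// the sandbox before handing the path to Spark.
import java.io.File

val local = new File("/mnt/mesos/sandbox/foo")
println(s"sandbox file exists=${local.exists()} size=${local.length()} bytes")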
Unfortunately, the task keeps failing with the following exception:

I0701 18:47:35.782994 30997 logging.cpp:188] INFO level logging started!
I0701 18:47:35.783197 30997 fetcher.cpp:424] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foobar-assembly-1.0.jar"}},{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"https:\/\/dripit-spark.s3.amazonaws.com\/foo"}}],"sandbox_directory":"\/var\/lib\/mesos\/slave\/slaves\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2\/frameworks\/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002\/executors\/driver-20160701184530-0001\/runs\/67b94f34-a9d3-4662-bedc-8578381e9305"}
I0701 18:47:35.784752 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784791 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:35.784818 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar'
I0701 18:47:35.784835 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
W0701 18:47:36.057448 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar
I0701 18:47:36.057673 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foobar-assembly-1.0.jar' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foobar-assembly-1.0.jar'
I0701 18:47:36.057696 30997 fetcher.cpp:379] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057714 30997 fetcher.cpp:250] Fetching directly into the sandbox directory
I0701 18:47:36.057741 30997 fetcher.cpp:187] Fetching URI 'https://dripit-spark.s3.amazonaws.com/foo'
I0701 18:47:36.057770 30997 fetcher.cpp:134] Downloading resource from 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
W0701 18:47:36.114565 30997 fetcher.cpp:272] Copying instead of extracting resource from URI with 'extract' flag, because it does not seem to be an archive: https://dripit-spark.s3.amazonaws.com/foo
I0701 18:47:36.114600 30997 fetcher.cpp:456] Fetched 'https://dripit-spark.s3.amazonaws.com/foo' to '/var/lib/mesos/slave/slaves/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2/frameworks/c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002/executors/driver-20160701184530-0001/runs/67b94f34-a9d3-4662-bedc-8578381e9305/foo'
I0701 18:47:36.307576 31006 exec.cpp:143] Version: 0.28.1
I0701 18:47:36.310127 31022 exec.cpp:217] Executor registered on slave c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-S2
16/07/01 18:47:37 INFO SparkContext: Running Spark version 1.6.1
16/07/01 18:47:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/01 18:47:37 WARN SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)

16/07/01 18:47:37 WARN SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 WARN SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.mesos.executor.docker.image=mesosphere/spark:1.0.0-1.6.1-2 ' as a work-around.
16/07/01 18:47:37 INFO SecurityManager: Changing view acls to: root
16/07/01 18:47:37 INFO SecurityManager: Changing modify acls to: root
16/07/01 18:47:37 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/01 18:47:37 INFO Utils: Successfully started service 'sparkDriver' on port 47358.
16/07/01 18:47:38 INFO Slf4jLogger: Slf4jLogger started
16/07/01 18:47:38 INFO Remoting: Starting remoting
16/07/01 18:47:38 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@10.0.1.107:54467]
16/07/01 18:47:38 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 54467.
16/07/01 18:47:38 INFO SparkEnv: Registering MapOutputTracker
16/07/01 18:47:38 INFO SparkEnv: Registering BlockManagerMaster
16/07/01 18:47:38 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-96092a9a-3164-4d65-8c0b-df5403abb056
16/07/01 18:47:38 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/07/01 18:47:38 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/07/01 18:47:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/01 18:47:38 INFO SparkUI: Started SparkUI at http://10.0.1.107:4040
16/07/01 18:47:38 INFO HttpFileServer: HTTP File server directory is /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:38 INFO HttpServer: Starting HTTP Server
16/07/01 18:47:38 INFO Server: jetty-8.y.z-SNAPSHOT
16/07/01 18:47:38 INFO AbstractConnector: Started SocketConnector@0.0.0.0:49074
16/07/01 18:47:38 INFO Utils: Successfully started service 'HTTP file server' on port 49074.
16/07/01 18:47:38 INFO SparkContext: Added JAR file:/mnt/mesos/sandbox/foobar-assembly-1.0.jar at http://10.0.1.107:49074/jars/foobar-assembly-1.0.jar with timestamp 1467398858626
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@716: Client environment:host.name=ip-10-0-1-107.eu-west-1.compute.internal
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@724: Client environment:os.arch=4.1.7-coreos-r1
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@725: Client environment:os.version=#2 SMP Thu Nov 5 02:10:23 UTC 2015
I0701 18:47:38.778355   103 sched.cpp:164] Version: 0.25.0
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@733: Client environment:user.name=(null)
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@741: Client environment:user.home=/root
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@log_env@753: Client environment:user.dir=/opt/spark/dist
2016-07-01 18:47:38,778:6(0x7f74cafc9700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=master.mesos:2181 sessionTimeout=10000 watcher=0x7f74d587c600 sessionId=0 sessionPasswd=<null> context=0x7f7540003f70 flags=0
2016-07-01 18:47:38,786:6(0x7f74c6ec0700):ZOO_INFO@check_events@1703: initiated connection to server [10.0.7.83:2181]
2016-07-01 18:47:38,787:6(0x7f74c6ec0700):ZOO_INFO@check_events@1750: session establishment complete on server [10.0.7.83:2181], sessionId=0x155a57d07f60050, negotiated timeout=10000
I0701 18:47:38.788107    99 group.cpp:331] Group process (group(1)@10.0.1.107:35064) connected to ZooKeeper
I0701 18:47:38.788147    99 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0701 18:47:38.788162    99 group.cpp:403] Trying to create path '/mesos' in ZooKeeper
I0701 18:47:38.789402    99 detector.cpp:156] Detected a new leader: (id='1')
I0701 18:47:38.789512    99 group.cpp:674] Trying to get '/mesos/json.info_0000000001' in ZooKeeper
I0701 18:47:38.790228    99 detector.cpp:481] A new leading master (UPID=master@10.0.7.83:5050) is detected
I0701 18:47:38.790293    99 sched.cpp:262] New master detected at master@10.0.7.83:5050
I0701 18:47:38.790473    99 sched.cpp:272] No credentials provided. Attempting to register without authentication
I0701 18:47:38.792147    97 sched.cpp:641] Framework registered with c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO CoarseMesosSchedulerBackend: Registered as framework ID c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001
16/07/01 18:47:38 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 38752.
16/07/01 18:47:38 INFO NettyBlockTransferService: Server created on 38752
16/07/01 18:47:38 INFO BlockManagerMaster: Trying to register BlockManager
16/07/01 18:47:38 INFO BlockManagerMasterEndpoint: Registering block manager 10.0.1.107:38752 with 511.1 MB RAM, BlockManagerId(driver, 10.0.1.107, 38752)
16/07/01 18:47:38 INFO BlockManagerMaster: Registered BlockManager
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 117.2 KB, free 117.2 KB)
16/07/01 18:47:39 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.6 KB, free 129.8 KB)
16/07/01 18:47:39 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.0.1.107:38752 (size: 12.6 KB, free: 511.1 MB)
16/07/01 18:47:39 INFO SparkContext: Created broadcast 0 from textFile at SimpleApp.scala:13
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 4 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 2 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 1 is now TASK_RUNNING
16/07/01 18:47:39 INFO CoarseMesosSchedulerBackend: Mesos task 3 is now TASK_RUNNING
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn1.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
16/07/01 18:47:39 WARN DFSUtil: Namenode for hdfs remains unresolved for ID nn2.  Check your hdfs-site.xml file to ensure namenodes are configured properly.
Exception in thread "main" java.lang.IllegalArgumentException: java.net.UnknownHostException: namenode1.hdfs.mesos
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:240)
    at org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy(ConfiguredFailoverProxyProvider.java:124)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:74)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:65)
    at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:152)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:579)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:524)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:167)
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:653)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:427)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:400)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$33.apply(SparkContext.scala:1015)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
    at org.apache.spark.rdd.RDD.count(RDD.scala:1157)
    at SimpleApp$.main(SimpleApp.scala:15)
    at SimpleApp.main(SimpleApp.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:786)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:123)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: namenode1.hdfs.mesos
    ... 48 more
16/07/01 18:47:39 INFO SparkContext: Invoking stop() from shutdown hook
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/07/01 18:47:39 INFO ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/07/01 18:47:40 INFO SparkUI: Stopped Spark web UI at http://10.0.1.107:4040
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Shutting down all executors
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: Asking each executor to shut down
I0701 18:47:40.051103   111 sched.cpp:1771] Asked to stop the driver
I0701 18:47:40.051283    96 sched.cpp:1040] Stopping framework 'c4bf7f81-1cf7-413a-b9be-8dc3b36137ee-0002-driver-20160701184530-0001'
16/07/01 18:47:40 INFO CoarseMesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
16/07/01 18:47:40 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/07/01 18:47:40 INFO MemoryStore: MemoryStore cleared
16/07/01 18:47:40 INFO BlockManager: BlockManager stopped
16/07/01 18:47:40 INFO BlockManagerMaster: BlockManagerMaster stopped
16/07/01 18:47:40 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/07/01 18:47:40 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/07/01 18:47:40 INFO SparkContext: Successfully stopped SparkContext
16/07/01 18:47:40 INFO ShutdownHookManager: Shutdown hook called
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75/httpd-69184304-7ffd-4420-b020-5f8a1bafecbd
16/07/01 18:47:40 INFO ShutdownHookManager: Deleting directory /tmp/spark-37696e45-5e8b-4328-81e6-deec1f185d75
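Reading the trace, the UnknownHostException is not thrown while resolving the file:// input path itself: FileInputFormat.setInputPaths calls JobConf.getWorkingDirectory, which instantiates the cluster's default FileSystem from the Hadoop configuration, and on this cluster that default apparently points at the unresolvable HDFS namenodes (namenode1.hdfs.mesos). If that reading is right, a minimal workaround sketch (untested; fs.defaultFS as the offending key is an assumption inferred from the trace) would be to force the local filesystem before creating the RDD:

// Hypothetical workaround: point Hadoop's default filesystem at the local FS
// so JobConf.getWorkingDirectory no longer tries to reach the HDFS namenodes.
sc.hadoopConfiguration.set("fs.defaultFS", "file:///")
val rdd = sc.textFile("file:///mnt/mesos/sandbox/foo")

or, equivalently, passing --conf spark.hadoop.fs.defaultFS=file:/// in the submit args.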