
Apache Spark: warning "Could not find CoarseGrainedScheduler" when running a Spark example on Mesos

I'm new to Spark, and I recently deployed my first Spark cluster on Mesos.

While developing an application in Python, I tried running the Pi example on the cluster. The job reported success, but I got the following warning:

16/10/18 17:28:54 WARN NettyRpcEndpointRef: Error sending message [message = RemoveExecutor(1,Executor finished with state FINISHED)] in 1 attempts
org.apache.spark.SparkException: Exception thrown in awaitResult
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:77)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$1.applyOrElse(RpcTimeout.scala:75)
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
    at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
    at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:83)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:102)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
    at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.removeExecutor(CoarseGrainedSchedulerBackend.scala:412)
    at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.executorTerminated(MesosCoarseGrainedSchedulerBackend.scala:555)
    at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.statusUpdate(MesosCoarseGrainedSchedulerBackend.scala:495)
Caused by: org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:152)
    at org.apache.spark.rpc.netty.Dispatcher.postLocalMessage(Dispatcher.scala:127)
    at org.apache.spark.rpc.netty.NettyRpcEnv.ask(NettyRpcEnv.scala:225)
    at org.apache.spark.rpc.netty.NettyRpcEndpointRef.ask(NettyRpcEnv.scala:508)
    at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
    ... 4 more
Also, one worker was killed.

This is how I submitted the application:

$SPARK_HOME/bin/spark-submit --master mesos://<MESOS_HOST>:<MESOS_PORT> $SPARK_HOME/examples/src/main/python/pi.py 1000
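
For context, the bundled examples/src/main/python/pi.py estimates Pi by Monte Carlo sampling, and the trailing 1000 is the number of partitions to split the work into. A minimal sketch of the same idea using the Spark 2.x API (not the exact bundled script):

from operator import add
from random import random

from pyspark.sql import SparkSession

# Minimal sketch of the Monte Carlo Pi estimate, in the spirit of the
# bundled examples/src/main/python/pi.py (not the exact script).
spark = SparkSession.builder.appName("PythonPi").getOrCreate()
partitions = 1000  # the "1000" argument from the spark-submit line

def inside(_):
    # Sample a point in the 2x2 square; count it if it lands in the unit circle.
    x, y = random() * 2 - 1, random() * 2 - 1
    return 1 if x * x + y * y <= 1 else 0

n = 100000 * partitions
count = (spark.sparkContext
              .parallelize(range(1, n + 1), partitions)
              .map(inside)
              .reduce(add))
print("Pi is roughly %f" % (4.0 * count / n))
spark.stop()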

Could anyone give me some advice? Thanks in advance.

After some experimenting, I found that the example runs fine when I run it on Spark alone (without Mesos). There were also some warnings at startup from TaskSchedulerImpl: "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources." In the end I tracked the problem down: it is a bug in Spark 2.0.0, and it went away after I upgraded to Spark 2.0.1.
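
For reference, a quick way to confirm which Spark version the driver is actually running after the upgrade, and to cap how many cores the job claims from the cluster (relevant when TaskSchedulerImpl complains that no resources were accepted). This is a minimal sketch; spark.cores.max is a standard Spark setting, and the app name is just illustrative:

from pyspark.sql import SparkSession

# Minimal sketch: print the Spark version the driver actually runs and
# cap how many cores this job grabs from the Mesos cluster.
spark = (SparkSession.builder
         .appName("version-check")         # illustrative name
         .config("spark.cores.max", "4")   # leave cores for other jobs
         .getOrCreate())

print(spark.version)  # should print 2.0.1 once the upgrade is in place
spark.stop()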