
Spark on Mesos: scheduling tasks on a single node


Say I run a PySpark shell against a Mesos cluster. I only want to take up 12 CPU cores, so I start it like this:

uu@r4:~$ pyspark --master mesos://e3.test:5050 --total-executor-cores 12 
Then the usual startup output follows:

Python 2.7.13 |Anaconda 2.5.0 (64-bit)| (default, Dec 20 2016, 23:09:15) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/01/31 08:16:31 INFO SparkContext: Running Spark version 1.6.2
17/01/31 08:16:31 INFO SecurityManager: Changing view acls to: uu
17/01/31 08:16:31 INFO SecurityManager: Changing modify acls to: uu
17/01/31 08:16:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(uu); users with modify permissions: Set(uu)
17/01/31 08:16:31 INFO Utils: Successfully started service 'sparkDriver' on port 53336.
17/01/31 08:16:31 INFO Slf4jLogger: Slf4jLogger started
17/01/31 08:16:32 INFO Remoting: Starting remoting
17/01/31 08:16:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@r4.test:59860]
17/01/31 08:16:32 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 59860.
17/01/31 08:16:32 INFO SparkEnv: Registering MapOutputTracker
17/01/31 08:16:32 INFO SparkEnv: Registering BlockManagerMaster
17/01/31 08:16:32 INFO DiskBlockManager: Created local directory at /var/tmp/spark/blockmgr-6b16ff11-b0bc-4a71-82f5-c69a363c8c1a
17/01/31 08:16:32 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
17/01/31 08:16:32 INFO SparkEnv: Registering OutputCommitCoordinator
17/01/31 08:16:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/01/31 08:16:32 INFO SparkUI: Started SparkUI at http://r4.test:4040
I0131 08:16:32.582038 24965 sched.cpp:226] Version: 1.1.0
I0131 08:16:32.586931 24958 sched.cpp:330] New master detected at master@192.168.0.15:5050
I0131 08:16:32.587162 24958 sched.cpp:341] No credentials provided. Attempting to register without authentication
I0131 08:16:32.596922 24956 sched.cpp:743] Framework registered with 075ef8d0-de21-472d-8198-80805006b93d-0051
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Registered as framework ID 075ef8d0-de21-472d-8198-80805006b93d-0051
17/01/31 08:16:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51135.
17/01/31 08:16:32 INFO NettyBlockTransferService: Server created on 51135
17/01/31 08:16:32 INFO BlockManagerMaster: Trying to register BlockManager
17/01/31 08:16:32 INFO BlockManagerMasterEndpoint: Registering block manager r4.test:51135 with 511.1 MB RAM, BlockManagerId(driver, r4.test, 51135)
17/01/31 08:16:32 INFO BlockManagerMaster: Registered BlockManager
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/

Using Python version 2.7.13 (default, Dec 20 2016 23:09:15)
SparkContext available as sc, HiveContext available as sqlContext.
But in the end only a single executor gets registered:

>>> 17/01/31 08:16:35 INFO CoarseMesosSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (r5.test:42965) with ID 023af0f2-fc60-4d9d-a3db-301ab34764c9-S3
17/01/31 08:16:35 INFO BlockManagerMasterEndpoint: Registering block manager r5.test:33239 with 511.1 MB RAM, BlockManagerId(023af0f2-fc60-4d9d-a3db-301ab34764c9-S3, r5.test, 33239)
Which means the whole Spark application will run on a single node. That is not the placement I want (mainly because of data-locality considerations). What I would expect is more like what Spark standalone does: the --total-executor-cores get spread more or less evenly across the cluster. A quick way to check where the work actually lands is sketched below.
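One way to verify the placement from the PySpark shell itself (a minimal sketch, not part of the original setup; the partition count is arbitrary and socket comes from the Python standard library): run a small throwaway job and collect the hostname each task executes on.

import socket

# run a tiny job and record which host each partition executes on;
# with a single executor this prints just one hostname, e.g. ['r5.test']
hosts = (sc.parallelize(range(1000), 48)
           .map(lambda _: socket.gethostname())
           .distinct()
           .collect())
print(hosts)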

Is there any way to achieve this? The remaining options that mention executors/cores don't seem to have any effect (they appear to be relevant only to standalone and YARN setups).

And why does Spark on Mesos adopt this fill-one-node-at-a-time placement strategy instead of spreading the work out?

UPD: The conf entries mentioned there don't work either:

pyspark --master mesos://e3.test:5050 --conf spark.executor.cores=2 --conf spark.cores.max=12
That is the issue you are hitting. In newer versions there is an option, spark.executor.cores, that limits the number of cores per executor (while spark.cores.max caps the total for the application).
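For reference, a minimal sketch of what that looks like on a newer Spark (assuming Spark 2.0+ in Mesos coarse-grained mode and the same master URL as above): capping the application at 12 cores with at most 2 cores per executor should let Mesos launch up to 6 smaller executors that can land on different agents.

from pyspark import SparkConf, SparkContext

# assumed values, mirroring the attempts above: 12 cores total for the app,
# at most 2 cores per executor, so up to 6 executors spread over the agents
conf = (SparkConf()
        .setAppName("spread-executors-sketch")
        .setMaster("mesos://e3.test:5050")
        .set("spark.cores.max", "12")
        .set("spark.executor.cores", "2"))
sc = SparkContext(conf=conf)

The same two settings can of course be passed on the command line with --conf, as in the UPD above.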
