Scala TaskSchedulerImpl: Initial job has not accepted any resources (Spark error)

I am trying to run the following example on my standalone-mode cluster:

package org.apache.spark.examples

import scala.math.random
import org.apache.spark._

/** Computes an approximation to pi */
object SparkPi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkPi")
      .setMaster("spark://192.168.17.129:7077")
      .set("spark.driver.allowMultipleContexts", "true")
    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x*x + y*y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}
Problem: I am running this code from the spark-shell (the Scala interface). When I try it, I repeatedly get this error:

15/02/09 06:39:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
Note: I can see my workers in the master's web UI, and I can also see a new job in the Running Applications section. But the application never finishes, and I keep seeing the error above.

What is the problem?

Thanks.

If you want to run this program from the spark-shell, start the shell with the argument --master spark://192.168.17.129:7077 and enter the following code:

import scala.math.random
import org.apache.spark._
val slices = 10
val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
val count = sc.parallelize(1 until n, slices).map { i =>
    val x = random * 2 - 1
    val y = random * 2 - 1
    if (x*x + y*y < 1) 1 else 0
}.reduce(_ + _)
println("Pi is roughly " + 4.0 * count / n)
Otherwise, compile the code into a jar and run it with spark-submit. Remove setMaster from the code and pass the master URL via the --master argument of the spark-submit script instead, and also remove the allowMultipleContexts setting from the code.

You only need one SparkContext: the spark-shell already creates one (sc), and a second context competing for the same workers is what typically leaves the new job waiting for resources.
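For reference, a minimal sketch of what that spark-submit variant might look like (only the jar name below is a placeholder for whatever your build produces):

package org.apache.spark.examples

import scala.math.random
import org.apache.spark._

/** Computes an approximation to pi; the master URL is supplied by spark-submit */
object SparkPi {
  def main(args: Array[String]) {
    // No setMaster and no allowMultipleContexts here: spark-submit provides the master,
    // and exactly one SparkContext is created for the application.
    val conf = new SparkConf().setAppName("SparkPi")
    val spark = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = math.min(100000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = spark.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x*x + y*y < 1) 1 else 0
    }.reduce(_ + _)
    println("Pi is roughly " + 4.0 * count / n)
    spark.stop()
  }
}

It would be submitted roughly as: spark-submit --class org.apache.spark.examples.SparkPi --master spark://192.168.17.129:7077 sparkpi.jar 10, where sparkpi.jar stands in for your own jar.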

You cannot run this code in the spark-shell. What is the exact code you entered in the spark-shell? I just copy and paste this code into the spark-shell and then call SparkPi.main(Array("10")).