Apache Spark: broadcast variable returns NullPointerException when running in an Amazon EMR cluster


The variable I share via broadcast is null when the job runs in the cluster.

My application is fairly complex, but I have put together a small example that works perfectly when I run it locally, yet fails in the cluster:

package com.gonzalopezzi.bigdata.bicing

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkContext, SparkConf}

object PruebaBroadcast2 extends App {
  val conf = new SparkConf().setAppName("PruebaBroadcast2")
  val sc = new SparkContext(conf)

  val arr : Array[Int] = (6 to 9).toArray
  val broadcasted = sc.broadcast(arr)

  val rdd : RDD[Int] = sc.parallelize((1 to 4).toSeq, 2) // the small sequence [1, 2, 3, 4] is parallelized into two partitions
  rdd.flatMap((a : Int) => List((a, broadcasted.value(0)))).reduceByKey(_+_).collect().foreach(println)  // NullPointerException in the flatMap: broadcasted is null on the executors

}
I don't know whether the problem is a coding error or a configuration issue.

This is the stack trace I get:

15/07/07 20:55:13 INFO scheduler.DAGScheduler: Job 0 failed: collect at PruebaBroadcast2.scala:24, took 0.992297 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, ip-172-31-36-49.ec2.internal): java.lang.NullPointerException
    at com.gonzalopezzi.bigdata.bicing.PruebaBroadcast2$$anonfun$2.apply(PruebaBroadcast2.scala:24)
    at com.gonzalopezzi.bigdata.bicing.PruebaBroadcast2$$anonfun$2.apply(PruebaBroadcast2.scala:24)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:202)
    at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:56)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Command exiting with ret '1'
Can anyone help me fix this? At the very least, can you tell me whether you see anything strange in the code? If you think the code is fine, please let me know, because that would mean the problem is in the cluster's configuration.


Thanks in advance.

I finally got it working.

Declaring the object like this does not work:

object MyObject extends App {
However, it does work if you declare the object with a main function:

object MyObject {
    def main (args : Array[String]) {
        /* ... */
    }
}
So, if I rewrite the short example from the question like this, it works:

object PruebaBroadcast2 {

  def main (args: Array[String]) {
    val conf = new SparkConf().setAppName("PruebaBroadcast2")
    val sc = new SparkContext(conf)

    val arr : Array[Int] = (6 to 9).toArray
    val broadcasted = sc.broadcast(arr)

    val rdd : RDD[Int] = sc.parallelize((1 to 4).toSeq, 2)

    rdd.flatMap((a : Int) => List((a, broadcasted.value(0)))).reduceByKey(_+_).collect().foreach(println)
  }
}
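As far as I can tell, the cause is scala.App's delayedInit: the vals in the body of an object that extends App are only assigned when its main() runs, and the executors load the object's class without ever running main(), so fields such as broadcasted stay null there. Below is a minimal sketch of that behaviour (Scala 2, no Spark involved; the object and field names are just for illustration):

object DelayedInitDemo extends App {
  val message: String = "hello" // only assigned when DelayedInitDemo.main() runs (via delayedInit)
}

object Probe {
  def main (args : Array[String]) {
    // DelayedInitDemo.main() is never called here, so the body above has not run yet
    println(DelayedInitDemo.message) // prints: null
  }
}

This is essentially what happens to the task closures on the executors: they reference fields of PruebaBroadcast2, but its delayedInit body never runs there.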
This issue seems to be related to this bug:

I had a similar problem. The issue was that I had a variable, used it inside an RDD map function, and got null for its value. This is my original code:

object MyClass extends App {
    ...
    val prefix = "prefix" 
    val newRDD = inputRDD.map(s => prefix + s) // got null for prefix
    ...
}
I found that it works inside any function, not just main():

object MyClass extends App {
    ...
    val prefix = "prefix"
    val newRDD = addPrefix(inputRDD, prefix)
    def addPrefix(input: RDD[String], prefix: String): RDD[String] = {
        input.map(s => prefix + s)
    }
}
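For completeness, a self-contained version of that pattern might look like the sketch below (the object name PrefixJob and the sample data are placeholders I added; the fix itself is the same addPrefix method as above):

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object PrefixJob extends App {
    val conf = new SparkConf().setAppName("PrefixJob")
    val sc = new SparkContext(conf)

    val prefix = "prefix"
    val inputRDD: RDD[String] = sc.parallelize(Seq("a", "b", "c"))

    // The closure inside addPrefix captures the method parameters, which are
    // serialized with the task, instead of fields of the half-initialized App object.
    addPrefix(inputRDD, prefix).collect().foreach(println)
    sc.stop()

    def addPrefix(input: RDD[String], prefix: String): RDD[String] = {
        input.map(s => prefix + s)
    }
}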

The bug is marked as "Fixed", but I still seem to run into the same problem (CDH 5.5.2).
The bug is marked as "Fixed", but the fix just prints a warning: "Subclasses of scala.App may not work correctly. Use a main() method instead." It's a hack, but aesthetically I prefer being able to simply write

object ... extends App { /* your code */ }
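One way to keep the extends App style (my reading of that comment, an assumption on my part that I have not verified on EMR) is to let the App body do nothing but delegate to a method, so that every closure captures locals of that method instead of fields of the object. A minimal sketch:

import org.apache.spark.{SparkConf, SparkContext}

object PruebaBroadcast2 extends App {
  run() // the App body only delegates; nothing is captured from here

  def run() {
    val conf = new SparkConf().setAppName("PruebaBroadcast2")
    val sc = new SparkContext(conf)

    val broadcasted = sc.broadcast((6 to 9).toArray) // local of run(), captured by value in the closure
    val rdd = sc.parallelize((1 to 4).toSeq, 2)

    rdd.flatMap((a : Int) => List((a, broadcasted.value(0)))).reduceByKey(_ + _).collect().foreach(println)
    sc.stop()
  }
}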