Spark submit failing with Hive


I am trying to get a Spark 1.1.0 program written in Scala to work, but I'm having a hard time with it. I have a very simple Hive query:

select json, score from data
When I run the following from the spark-shell, everything works fine (I need MYSQL_CONN on the driver classpath because I am using Hive with a MySQL metastore).
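The actual spark-shell snippet is not reproduced here, so the following is only a reconstruction of what such a session could look like, based on the description and on the spark-submit command further down (the launch flags and variables are assumptions, not the original snippet):

bin/spark-shell --master $SPARK_URL --driver-class-path $MYSQL_CONN

scala> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
scala> sqlContext.sql("select json, score from data").map(t => t.getString(0)).take(10).foreach(println)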

I get my ten rows of json, just as I want. However, when I run the program with spark-submit, I run into a problem:

bin/spark-submit --master $SPARK_URL --class spark.Main --driver-class-path $MYSQL_CONN target/spark-testing-1.0-SNAPSHOT.jar
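Since the missing class in the error below is the compiled anonymous function from main, one quick sanity check (not part of the original post) is to confirm that this class actually ended up inside the jar handed to spark-submit:

jar tf target/spark-testing-1.0-SNAPSHOT.jar | grep 'anonfun'

If nothing shows up, the jar was built without the closure classes and the executors have no way to load spark.Main$$anonfun$main$1.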
Here is my entire Spark program:

package spark

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkContext, SparkConf}

object Main {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("Gathering Data"))
    val sqlContext = new HiveContext(sc)
    sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println)
  }
}
Here is the resulting stack trace:

14/12/01 21:30:04 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, match1hd17.dc1): java.lang.ClassNotFoundException: spark.Main$$anonfun$main$1
        java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        java.security.AccessController.doPrivileged(Native Method)
        java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        java.lang.Class.forName0(Native Method)
        java.lang.Class.forName(Class.java:247)
        org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59)
        java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1575)
        java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
        org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        java.lang.Thread.run(Thread.java:619)
14/12/01 21:30:10 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, match1hd12.dc1m): java.lang.ClassNotFoundException: spark.Main$$anonfun$main$1
        java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        java.security.AccessController.doPrivileged(Native Method)
        java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        java.lang.ClassLoader.loadClass(ClassLoader.java:252)
        java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
        java.lang.Class.forName0(Native Method)
        java.lang.Class.forName(Class.java:247)
        org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59)
        java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1575)
        java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1496)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1732)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1947)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1329)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:351)
        org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        java.lang.Thread.run(Thread.java:619)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
I have spent hours on this and I cannot figure out why it only works from the spark-shell. I looked at the stderr output on the individual nodes, and they all have the same cryptic error message. If anyone can explain why this works from the spark-shell but not from spark-submit, that would be great.

Thanks.

Update:

I have been playing around with this, and the following program works fine:

package spark

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkContext, SparkConf}

object Main {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("Gathering Data"))
    val sqlContext = new HiveContext(sc)
    sqlContext.sql("select json from data").take(10).map(t => t.getString(0)).foreach(println)
  }
}

Obviously this won't work for large amounts of data, but it shows that the problem seems to lie in the SchemaRDD.map() function.
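A plausible reading of this difference: in the working version take(10) runs first, so the ten rows are pulled back to the driver and the subsequent map is ordinary local collection code; in the failing version map runs on the SchemaRDD, so the closure t => t.getString(0) is compiled into the class spark.Main$$anonfun$main$1 and has to be deserialized on every executor, which is exactly the class the executors report as missing. Side by side:

// Fails under spark-submit: the closure executes on the executors, so the compiled
// anonymous-function class (spark.Main$$anonfun$main$1) must be on their classpath.
sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println)

// Works: take(10) first returns a local Array[Row] to the driver, so the map below
// is plain Scala collection code and never leaves the driver.
sqlContext.sql("select json from data").take(10).map(t => t.getString(0)).foreach(println)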

It seems there is a problem with your SparkContext initialization.

Please try the following code:

val sparkConf = new SparkConf().setAppName("Gathering Data");
val sc = new SparkContext(sparkConf);
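For context, a fuller sketch of how that initialization could sit in the original program is below. The setJars call and the jar path are not from the answer; they are just one assumed way to make sure the application jar is shipped to the executors, since the stack trace shows them failing to load a class from it:

package spark

import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkContext, SparkConf}

object Main {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf()
      .setAppName("Gathering Data")
      // Hypothetical: register the application jar explicitly so executors can load
      // classes such as spark.Main$$anonfun$main$1 (spark-submit normally handles this).
      .setJars(Seq("target/spark-testing-1.0-SNAPSHOT.jar"))
    val sc = new SparkContext(sparkConf)
    val sqlContext = new HiveContext(sc)
    sqlContext.sql("select json from data").map(t => t.getString(0)).take(10).foreach(println)
  }
}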

I once ran into a similar error that executed fine in the spark-shell but not with spark-submit; it later turned out my SparkContext configuration was incorrect.
In the error message I see a ClassNotFoundException, so I wondered whether it might be a compilation problem, hence the ClassNotFoundException. In any case, I'll try this code on my cluster and let you know.
What are your hive-site.xml and $MYSQL_CONN? I have set this up with Spark 1.5 and Hive 1.2, and when I try to launch the spark-shell I get a "specified key too long: max 764" error.
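For what it's worth, $MYSQL_CONN in the question is presumably the path to the MySQL JDBC driver jar, something along these lines (the path and version here are made up for illustration):

export MYSQL_CONN=/path/to/mysql-connector-java-5.1.34-bin.jar
bin/spark-submit --master $SPARK_URL --class spark.Main --driver-class-path $MYSQL_CONN target/spark-testing-1.0-SNAPSHOT.jar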