Apache Spark: how to run a SQLContext in spark-jobserver

I am trying to execute a job locally in spark-jobserver. My application has the following dependencies:

name := "spark-test"

version := "1.0"

scalaVersion := "2.10.6"

resolvers += Resolver.jcenterRepo

libraryDependencies += "org.apache.spark"  %%  "spark-core"  %  "1.6.1"
libraryDependencies += "spark.jobserver"  %%  "job-server-api" % "0.6.2" % "provided"
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.2"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.2"
libraryDependencies += "com.holdenkarau" % "spark-testing-base_2.10" % "1.6.2_0.4.7" % "test"
I built the application package with:

sbt assembly
After that, I submitted the package like this:

curl --data-binary @spark-test-assembly-1.0.jar localhost:8090/jars/myApp
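
For reference, the job is triggered with a POST to the /jobs endpoint, roughly like this (appName matches the upload above, classPath is the job class; without a context parameter the server creates an ad-hoc context, which is why a generated context name shows up in the error below):

curl -d "" 'localhost:8090/jobs?appName=myApp&classPath=jobs.TransformationJob'
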
When I trigger the job, I get the following error:

{
  "duration": "0.101 secs",
  "classPath": "jobs.TransformationJob",
  "startTime": "2017-02-17T13:01:55.549Z",
  "context": "42f857ba-jobs.TransformationJob",
  "result": {
    "message": "java.lang.Exception: Could not find resource path for Web UI: org/apache/spark/sql/execution/ui/static",
    "errorClass": "java.lang.RuntimeException",
    "stack": ["org.apache.spark.ui.JettyUtils$.createStaticHandler(JettyUtils.scala:180)", "org.apache.spark.ui.WebUI.addStaticHandler(WebUI.scala:117)", "org.apache.spark.sql.execution.ui.SQLTab.<init>(SQLTab.scala:34)", "org.apache.spark.sql.SQLContext$$anonfun$createListenerAndUI$1.apply(SQLContext.scala:1369)", "org.apache.spark.sql.SQLContext$$anonfun$createListenerAndUI$1.apply(SQLContext.scala:1369)", "scala.Option.foreach(Option.scala:236)", "org.apache.spark.sql.SQLContext$.createListenerAndUI(SQLContext.scala:1369)", "org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:77)", "jobs.TransformationJob$.runJob(TransformationJob.scala:64)", "jobs.TransformationJob$.runJob(TransformationJob.scala:14)", "spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:301)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)", "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)", "java.lang.Thread.run(Thread.java:745)"]
  },
  "status": "ERROR",
  "jobId": "a6bd6f23-cc82-44f3-8179-3b68168a2aa7"
}
I have a few questions:

1) I noticed that I don't need to have Spark installed to run spark-jobserver locally. Does spark-jobserver come with Spark embedded?

2) How can I tell which Spark version spark-jobserver is using, and where is that defined?

3) I am using version 1.6.2 of spark-sql. Should I change it or keep it?

I would really appreciate it if someone could answer my questions.

  • Yes, spark-jobserver ships with its own Spark dependencies. You should run job-server-extras/reStart instead of job-server/reStart; that pulls in the SQL-related dependencies (see the sketch after this list).
  • Check project/Versions.scala.
  • I don't think you need the spark-sql dependency, because it is already included when you run job-server-extras/reStart.
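
Roughly, with job-server-extras a job can extend the SparkSqlJob trait and let the job server hand it the SQLContext, instead of building one from the SparkContext inside runJob. A minimal sketch, assuming the spark.jobserver.SparkSqlJob trait shipped with job-server-extras 0.6.x (the object name and the query are just placeholders):

    package jobs

    import com.typesafe.config.Config
    import org.apache.spark.sql.SQLContext
    import spark.jobserver.{SparkJobValid, SparkJobValidation, SparkSqlJob}

    // The job server creates the SQLContext and passes it in, so the job
    // never calls new SQLContext(sparkCtx) itself.
    object SqlTransformationJob extends SparkSqlJob {
      override def validate(sql: SQLContext, config: Config): SparkJobValidation = SparkJobValid

      override def runJob(sql: SQLContext, config: Config): Any = {
        sql.sql("SELECT 1").collect()  // placeholder query
      }
    }

If I remember correctly, such a job also needs a SQL context on the server side (for example one created with context-factory=spark.jobserver.context.SQLContextFactory), so check the job-server-extras docs for the version you run.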

  • How are you running spark-jobserver right now? Hello @noorul, this is how I run spark-jobserver: job-server/reStart. Did the answer help you? Yes, I can run it now, thanks. This is how I create the SQLContext inside the job:
    override def runJob(sparkCtx: SparkContext, config: Config): Any = {
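        // The SQLContext constructor below is where the "Could not find resource
        // path for Web UI" error from the stack trace above is thrown (SQLTab setup).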
        val sqlContext = new SQLContext(sparkCtx)
        ...
    }