Connecting to an external Spark cluster from Eclipse with Scala

I have deployed CDH 5.3 on an Amazon EC2 cluster and all services are running fine. I use Spark in YARN cluster mode and develop my scripts from the command line inside the cluster. All of the cluster's ports are open for external connections.

I would like to connect to my cloud cluster from the Eclipse Scala IDE on my laptop running Windows 8, and submit the jobs I develop in a Scala project from Eclipse.

I created a Scala project with the 2.10.4 Scala library container and, using FileZilla, copied the external JARs extracted from the cluster's namenode to my laptop and added them to the project:

/opt/cloudera/parcels/CDH/lib/hadoop/lib/*.jar

/opt/cloudera/parcels/CDH/lib/spark/lib/*.jar
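Rather than copying the JARs by hand, the same dependency could in principle be declared through sbt; the build.sbt below is only a sketch, and the Spark version is an assumption based on CDH 5.3 shipping Spark 1.2.x:

// build.sbt -- hypothetical sketch; the Spark version is an assumption, not read from the cluster
name := "SimpleApp"

version := "0.1"

scalaVersion := "2.10.4"

// "provided" because the cluster itself supplies the Spark runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" % "provided"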

Then, with the following SimpleApp.scala code, I try to run the job from Eclipse:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
 def main(args: Array[String]) {

   //val urlOption2 = "yarn-client"
   val urlOption1 = "spark://ecX-XX-XX-XX-XX.eu-west-1.compute.amazonaws.com:7170"
   val conf = new SparkConf().setAppName("Simple Application")
   val sc = new SparkContext(urlOption2 , "test", conf)
 }
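The same thing can also be expressed by putting the master on the SparkConf itself via setMaster instead of passing it to the three-argument SparkContext constructor. This is only a sketch: the host name is the placeholder from above, the 7077 port is an assumption (the default port of the standalone master), and the yarn-client alternative needs the cluster's Hadoop configuration visible to the driver:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleAppSetMaster {
 def main(args: Array[String]) {
   val conf = new SparkConf()
     .setAppName("Simple Application")
     .setMaster("spark://ecX-XX-XX-XX-XX.eu-west-1.compute.amazonaws.com:7077")
     // .setMaster("yarn-client") // needs HADOOP_CONF_DIR / YARN_CONF_DIR on the driver side
   val sc = new SparkContext(conf)
   try {
     // trivial job, just to confirm the connection actually works
     println(sc.parallelize(1 to 100).count())
   } finally {
     sc.stop()
   }
 }
}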
When I try option 1, setting the master to the URL of the cluster's EC2 namenode, I get the following error:

15/02/27 10:36:31 WARN AppClient$ClientActor: Failed to connect to master
org.apache.spark.SparkException: Invalid master URL: spark://ecX-XX-XX-XX-XX.eu-west-1.compute.amazonaws.com:7170
    at org.apache.spark.deploy.master.Master$.toAkkaUrl(Master.scala:830)
    at org.apache.spark.deploy.client.AppClient$ClientActor$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:80)
    at org.apache.spark.deploy.client.AppClient$ClientActor$$anonfun$tryRegisterAllMasters$1.apply(AppClient.scala:78)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at org.apache.spark.deploy.client.AppClient$ClientActor.tryRegisterAllMasters(AppClient.scala:78)
    at org.apache.spark.deploy.client.AppClient$ClientActor.registerWithMaster(AppClient.scala:86)
    at org.apache.spark.deploy.client.AppClient$ClientActor.preStart(AppClient.scala:68)
    at akka.actor.ActorCell.create(ActorCell.scala:562)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:425)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:447)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:262)
    at akka.dispatch.Mailbox.run(Mailbox.scala:218)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
When I try option 2, where I cannot specify the IP of the master/namenode to connect to, I get the following output:

15/02/27 10:50:34 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/02/27 10:50:36 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/02/27 10:50:38 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
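The 0.0.0.0:8032 address suggests that the client never sees the cluster's yarn-site.xml and falls back to the default ResourceManager address. A small check like the one below (only a sketch; it relies on the standard yarn.resourcemanager.address property) shows which address the client actually resolves from whatever configuration is on its classpath:

import org.apache.hadoop.yarn.conf.YarnConfiguration

object CheckRmAddress {
 def main(args: Array[String]) {
   // YarnConfiguration loads yarn-default.xml and yarn-site.xml from the classpath
   val conf = new YarnConfiguration()
   // prints 0.0.0.0:8032 when no cluster configuration is visible to the client
   println(conf.get(YarnConfiguration.RM_ADDRESS, YarnConfiguration.DEFAULT_RM_ADDRESS))
 }
}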
I tried to follow the instructions from the link to the Apache Spark website given in the second section, about getting Spark set up, of the Eclipse developer blog post by jamescenter, but there are no instructions