Accessing an Apache Spark standalone master via IP

I am trying to connect to an Apache Spark master from Java, but by providing an IP address rather than a hostname.

This is the code where I create the SparkConf:

import org.apache.spark.SparkConf;

// The surrounding method is only shown for context; its name and
// parameter names are arbitrary.
private SparkConf createSparkConf(String appName, String master, String[] jars) {
    return new SparkConf()
            .setAppName(appName)
            .setMaster(master)
            .setJars(jars)
            .set("spark.serializer",
                    "org.apache.spark.serializer.KryoSerializer");
}
I want to pass spark://IP:PORT as the master. Unfortunately, this does not seem to work: it only works with a hostname (e.g. spark://MyMacbook:7077), not with an IP (e.g. spark://127.0.0.1:7077). Is it possible to start the master so that it also accepts requests via its IP?
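To make the two cases concrete, here is a minimal, self-contained sketch of what I am doing (the class name, app name, and addresses are illustrative; it assumes a standalone master listening on port 7077):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class MasterUrlProbe {
    public static void main(String[] args) {
        // The hostname form connects; swapping in the IP form does not
        // (the master drops the message, see the log further down).
        String master = "spark://MyMacbook:7077";      // works
        // String master = "spark://127.0.0.1:7077";   // does not work

        SparkConf conf = new SparkConf()
                .setAppName("master-url-probe")
                .setMaster(master);
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            System.out.println("Connected to " + conf.get("spark.master"));
        }
    }
}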

I need this because I am building a rather complex setup with Docker and would like to reach the master from outside the containers (initially just for testing purposes).

Edit: I have now checked the master's console, and it shows:

dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://sparkMaster@192.168.99.100:7077/]] arriving at [akka.tcp://sparkMaster@192.168.99.100:7077] inbound addresses are [akka.tcp://sparkMaster@spark-master:7077]
So we can see that Akka drops the message because it was addressed to the IP (192.168.99.100) instead of the hostname (spark-master): Akka remoting only delivers a message if the address it was sent to matches one of the actor system's own inbound (bound) addresses. But I want to use the IP... In my case, passing -h 192.168.99.100 as a startup parameter to the master will not work either, because with Docker, 192.168.99.100 is the host machine's IP, not an address that exists inside the container.

Isn't it possible to define multiple hostnames, or at least to accept all incoming requests?
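As far as I can tell, the mismatch is about the literal address string in the Akka message, not about routing, so dialing the bound hostname while the bytes still flow to the IP should work. A client-side sketch (it assumes, hypothetically, that spark-master has been mapped to 192.168.99.100, e.g. via an /etc/hosts entry):

import java.net.InetAddress;

public class ResolveProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical: 'spark-master' is mapped to 192.168.99.100 in /etc/hosts.
        // The client can then dial the hostname the master bound itself to,
        // even though the connection still goes to the IP underneath.
        InetAddress addr = InetAddress.getByName("spark-master");
        System.out.println("spark-master resolves to " + addr.getHostAddress());

        // The master URL would then be spark://spark-master:7077 instead of
        // spark://192.168.99.100:7077.
    }
}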

Edit: This is still unsolved, but I found another problem. When I try to start the Spark standalone master and bind it to the public IP (192.168.99.100 in the Docker case), I get the following error:

Exception in thread "main" java.net.BindException: Failed to bind to: /192.168.99.100:7093: Service 'sparkMaster' failed after 16 retries!
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Success.map(Try.scala:206)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
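The bind failure itself can be reproduced without Spark: inside the container, 192.168.99.100 belongs to the Docker host machine, not to any local network interface, so any attempt to bind a socket to it fails. A minimal sketch (class name and port are illustrative):

import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BindProbe {
    public static void main(String[] args) throws Exception {
        // 192.168.99.100 is the Docker host's address; inside the container it
        // is not a local interface, so bind() fails just like 'sparkMaster' above.
        try (ServerSocket socket = new ServerSocket()) {
            socket.bind(new InetSocketAddress("192.168.99.100", 7093));
            System.out.println("Bound successfully");
        } catch (BindException e) {
            System.out.println("Cannot bind: " + e.getMessage());
        }
    }
}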
This problem seems to be related to (or even the same as?) this unanswered question from Wayne Song: