Apache Spark spark-submit "Service 'Driver' could not bind on port" error


I am running the Spark Java WordCount example with the following command:

time spark-submit --deploy-mode cluster --master spark://192.168.0.7:6066 --class org.apache.spark.examples.JavaWordCount /home/pi/Desktop/example/new/target/javaword.jar /books_50.txt 
When I run it, I get the following output:

Running Spark using the REST application submission protocol.
16/07/18 03:55:41 INFO rest.RestSubmissionClient: Submitting a request to launch an application in spark://192.168.0.7:6066.
16/07/18 03:55:44 INFO rest.RestSubmissionClient: Submission successfully created as driver-20160718035543-0000. Polling submission state...
16/07/18 03:55:44 INFO rest.RestSubmissionClient: Submitting a request for the status of submission driver-20160718035543-0000 in spark://192.168.0.7:6066.
16/07/18 03:55:44 INFO rest.RestSubmissionClient: State of driver driver-20160718035543-0000 is now RUNNING.
16/07/18 03:55:44 INFO rest.RestSubmissionClient: Driver is running on worker worker-20160718041005-192.168.0.12-42405 at 192.168.0.12:42405.
16/07/18 03:55:44 INFO rest.RestSubmissionClient: Server responded with CreateSubmissionResponse:
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20160718035543-0000",
  "serverSparkVersion" : "1.6.2",
  "submissionId" : "driver-20160718035543-0000",
  "success" : true
}
I checked the logs of that particular worker (192.168.0.12), and they say:

Launch Command: "/usr/lib/jvm/jdk-8-oracle-arm32-vfp-hflt/jre/bin/java" "-cp" "/opt/spark/conf/:/opt/spark/lib/spark-assembly-1.6.2-hadoop2.6.0.jar:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark/lib/datanucleus-core-3.2.10.jar:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" "-Xms1024M" "-Xmx1024M" "-Dspark.driver.supervise=false" "-Dspark.app.name=org.apache.spark.examples.JavaWordCount" "-Dspark.submit.deployMode=cluster" "-Dspark.jars=file:/home/pi/Desktop/example/new/target/javaword.jar" "-Dspark.master=spark://192.168.0.7:7077" "-Dspark.executor.memory=10M" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@192.168.0.12:42405" "/opt/spark/work/driver-20160718035543-0000/javaword.jar" "org.apache.spark.examples.JavaWordCount" "/books_50.txt"
========================================

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/07/18 04:10:58 INFO SecurityManager: Changing view acls to: pi
16/07/18 04:10:58 INFO SecurityManager: Changing modify acls to: pi
16/07/18 04:10:58 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(pi); users with modify permissions: Set(pi)
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
16/07/18 04:11:00 WARN Utils: Service 'Driver' could not bind on port 0. Attempting port 1.
Exception in thread "main" java.net.BindException: Cannot assign requested address: Service 'Driver' failed after 16 retries! Consider explicitly setting the appropriate port for the service 'Driver' (for example spark.ui.port for SparkUI) to an available port or increasing spark.port.maxRetries.
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:485)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1089)
    at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:430)
    at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:415)
    at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:903)
    at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:198)
    at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:348)
    at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
My spark-env.sh file (for the master) contains:

export SPARK_MASTER_WEBUI_PORT="8080"
export SPARK_MASTER_IP="192.168.0.7"
export SPARK_EXECUTOR_MEMORY="10M"
export SPARK_WORKER_WEBUI_PORT="8080"
export SPARK_MASTER_IP="192.168.0.7"
export SPARK_EXECUTOR_MEMORY="10M"
My spark-env.sh file (for the worker) contains:

export SPARK_MASTER_WEBUI_PORT="8080"
export SPARK_MASTER_IP="192.168.0.7"
export SPARK_EXECUTOR_MEMORY="10M"
export SPARK_WORKER_WEBUI_PORT="8080"
export SPARK_MASTER_IP="192.168.0.7"
export SPARK_EXECUTOR_MEMORY="10M"

Please help.



I tried the same steps but was able to run the job. If possible, please post your complete spark-env.sh and spark-defaults.

You need to enter your hostname in the /etc/hosts file. For example:

127.0.0.1   localhost "hostname"

I had the same problem when trying to run the shell, and I was able to resolve it by setting the SPARK_LOCAL_IP environment variable. You can assign it from the command line when running the shell:

SPARK_LOCAL_IP=127.0.0.1 ./bin/spark-shell

For a more permanent solution, create a spark-env.sh file in the conf directory of your Spark root. Add the following line:

SPARK_LOCAL_IP=127.0.0.1


Give the script execute permission with
chmod +x ./conf/spark-env.sh
so that this environment variable is set by default.

I am using Maven/SBT to manage dependencies, and Spark core is included in my jar file.

You can also override SPARK_LOCAL_IP at runtime by setting "spark.driver.bindAddress" (in Scala):
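The snippet itself was not included above; here is a minimal Scala sketch of what this could look like (the app name is made up for illustration):

import org.apache.spark.sql.SparkSession

// Minimal sketch: bind the driver to the loopback address at runtime
// instead of relying on the SPARK_LOCAL_IP environment variable.
val spark = SparkSession.builder()
  .appName("BindAddressExample") // hypothetical app name
  .master("local[*]")
  .config("spark.driver.bindAddress", "127.0.0.1")
  .getOrCreate()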

I also had this problem.

The reason (for me) was that the IP of my local system was not reachable from my local system. I know that statement makes no sense, but please read on.

My system name (uname -s) showed that my system was named "sparkmaster". In my /etc/hosts file I had assigned a fixed IP address, "192.168.1.70", to the sparkmaster system, with additional fixed IP addresses for sparknode01 and sparknode02 at ...1.71 and ...1.72, respectively.

Due to some other problems, I needed to change all of my network adapters to DHCP. This meant they were getting addresses such as 192.168.90.123. The DHCP addresses were not on the same network as the ...1.70 range, and no route was configured.

When Spark starts, it seems to want to try to connect to the host named in uname (sparkmaster in my case). That was the IP 192.168.1.70, but it could not connect to that address because it was on an unreachable network.

My solution was to change one of my Ethernet adapters back to a fixed static address (i.e., 192.168.1.70), and voilà, problem solved.

So the issue seems to be that when Spark starts in "local mode", it tries to connect to a system named after your system name (rather than to localhost). I guess this makes sense if you want to set up a cluster (as I did), but it can result in the confusing message above.
Putting your system's hostname on the 127.0.0.1 entry in /etc/hosts might also solve the problem, but I did not try it.
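As a quick check (my own sketch, not part of the original answer), you can print the hostname and the address the JVM resolves for the local machine; if that address is on an unreachable network, the driver bind will fail as described:

import java.net.InetAddress

object CheckLocalHost {
  def main(args: Array[String]): Unit = {
    // The driver binds to whatever the local hostname resolves to,
    // so an /etc/hosts entry pointing at an unreachable IP breaks it.
    val local = InetAddress.getLocalHost
    println(s"Hostname:         ${local.getHostName}")
    println(s"Resolved address: ${local.getHostAddress}")
  }
}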

I fixed this issue by modifying the slaves file, at spark-2.4.0-bin-hadoop2.7/conf/slaves.
Please check your configuration.

I ran into this issue, and it was because the real IP had been replaced with my IP in /etc/hosts.

This issue is related only to the IP address. The error messages in the log file are not informative. Check it with the following three steps:

  • Check your IP address - it can be checked with the ifconfig or ip command. If your service is not public-facing, an IP address starting with 192.168 is good enough. 127.0.0.1 cannot be used if you are planning a cluster.

  • Check your environment variable SPARK_MASTER_HOST - check whether there is a typo in the variable name or in the actual IP address.

    env | grep SPARK_

  • Check whether the port you plan to use for the sparkMaster is available, using the netstat command. Do not use a port below 1024. For example (see also the bind-check sketch after this list):

    netstat -a | grep 9123


  • After your sparkMaster starts running, if you are not able to see the web UI from another machine, open the web UI port with the iptables command.
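    If netstat is inconclusive, a small JVM-level check can reproduce the exact bind failure Spark logs. This is a sketch of my own, using the master IP 192.168.0.7 from the question and the port 9123 from the example above as defaults:

    import java.net.{InetSocketAddress, ServerSocket}
    import scala.util.{Failure, Success, Try}

    object PortBindCheck {
      def main(args: Array[String]): Unit = {
        // Try to bind a server socket on the host/port you plan to give Spark.
        // A java.net.BindException ("Cannot assign requested address") here is
        // the same failure reported by the 'Driver' service in the log above.
        val host = if (args.nonEmpty) args(0) else "192.168.0.7"
        val port = if (args.length > 1) args(1).toInt else 9123
        Try {
          val socket = new ServerSocket()
          socket.bind(new InetSocketAddress(host, port))
          socket.close()
        } match {
          case Success(_) => println(s"OK: able to bind $host:$port")
          case Failure(e) => println(s"FAILED to bind $host:$port -> ${e.getMessage}")
        }
      }
    }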

    In a DataFrame application, use it as shown below:


    val spark = SparkSession.builder.appName("BinarizerExample").master("local[*]").config("spark.driver.bindAddress", "127.0.0.1").getOrCreate()

    First option:-

    The following steps might help:

    Get your hostname by using the "hostname" command.
    
     xxxxxx.ssssss  (e) base  ~  hostname
     xxxxxx.ssssss.net
    
    If the hostname does not exist, make the following entry in the /etc/hosts file:

    127.0.0.1      xxxxxx.ssssss.net
    
    Second option:-

    You can set spark.driver.bindAddress in your spark.conf file:

    spark.driver.bindAddress=127.0.0.1
    

    Thanks

    Thanks for the suggestion. I tried adding/removing/editing that line in the hosts file, but it does not change anything. I am quite sure it is related to your network setup, not your Spark setup. I got the same error and was able to resolve it by adding an entry in the hosts file. To be sure, you need to replace "hostname" in the last cmd with your own hostname (type the $hostname cmd in a shell) and drop the quotes; that was the root cause. Where did you resolve this? I am facing the exact same issue on Spark v2.0.0. Hi, I could not find any lead on this particular issue, so I moved on to the streaming WordCount Python example. Please let me know if you find a fix. Doing an export for SPARK_LOCAL_IP before starting the script worked for me - a quick way to test whether this helps. Same problem here - the hostname was "45131" and Spark treated it as an IP. LOL, a numbers-only hostname can cause all kinds of challenges.