Apache Spark unable to connect


After running the command

spark-submit --class org.apache.spark.examples.SparkPi --proxy-user yarn --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1 --queue default ./examples/jars/spark-examples_2.11-2.3.0.jar 10000
I get this in the output, and it keeps retrying the connection. Where am I going wrong? Am I missing some configuration?

I have created a new user for yarn and run the job as that user.

WARN  Utils:66 - Your hostname, ukaleem-HP-EliteBook-850-G3 resolves to a loopback address: 127.0.1.1; using 10.XX.XX.XX instead (on interface enp0s31f6)
2018-06-14 16:50:41 WARN  Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address
Warning: Local jar /home/yarn/Documents/Scala-Examples/./examples/jars/spark-examples_2.11-2.3.0.jar does not exist, skipping.
2018-06-14 16:50:42 INFO  RMProxy:98 - Connecting to ResourceManager at /0.0.0.0:8032
2018-06-14 16:50:44 INFO  Client:871 - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
And in the end, it throws this exception:

    Exception in thread "main" java.net.ConnectException: Call From ukaleem-HP-EliteBook-850-G3/127.0.1.1 to 0.0.0.0:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor4.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy8.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy9.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
    at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
    at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
    at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
    at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:59)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:154)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1146)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:179)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:177)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 28 more
2018-06-14 17:10:53 INFO  ShutdownHookManager:54 - Shutdown hook called
2018-06-14 17:10:53 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-5bddb7f3-165f-451c-8ab4-bb7729f4237c
EDIT: After adding the config files to the spark/conf directory, I now get the error below.

The files I added are

core-site.xml, dfs.hosts, masters, slaves, yarn-site.xml

and a few more. As far as I understand, I should only need yarn-site.xml to tell Spark where the YARN cluster is (IDs, addresses, hostnames, etc.).

All along I had thought that even when we just want to submit a job, these configs need to go into the /etc/hadoop directory, not spark/conf. So what is the purpose of installing Hadoop (apart from communication)? Which leads to the follow-up question: if the configs need to go in spark/conf, should HADOOP_CONF_DIR and YARN_CONF_DIR point to the etc/hadoop dir or to spark/conf?

    INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
18/06/19 11:04:50 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm2 after 1 fail over attempts. Trying to fail over after sleeping for 38176ms.
java.net.ConnectException: Call From ukaleem-HP-EliteBook-850-G3/127.0.1.1 to svc-hadoop-mgnt-pre-c2-01.jamba.net:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy13.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.getClusterMetrics(Unknown Source)
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
    at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
    at org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
    at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
    at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:59)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:154)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1146)
    at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:179)
    at org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:177)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:712)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
    at org.apache.hadoop.ipc.Client.call(Client.java:1451)
    ... 29 more

If you are running this on your local machine, update your /etc/hosts file, entering 127.0.0.1 against your hostname.
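
For example, with the hostname from the logs above, the relevant /etc/hosts entries would look something like this:

127.0.0.1   localhost
127.0.0.1   ukaleem-HP-EliteBook-850-G3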

Assuming you have a fully distributed YARN cluster: your spark-submit script is unable to find the configuration for the YARN ResourceManager (essentially the YARN master node). Make sure HADOOP_CONF_DIR is properly set in your environment, and that it points to your cluster's configuration, in particular your yarn-site.xml.
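
A minimal sketch of what that looks like before submitting (the path is just an example; point it wherever your cluster's client configs actually live):

# must contain yarn-site.xml, core-site.xml, ...
export HADOOP_CONF_DIR=/etc/hadoop/conf
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster \
  ./examples/jars/spark-examples_2.11-2.3.0.jar 10000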

EDIT: adding more detail. The hadoop package ships with both server and client software. The server software is the many daemons that run to make up the cluster. If your workstation is acting as a client (using that term loosely, not strictly related to Spark's --deploy-mode), then the hadoop client software must know the network locations of the server daemons running in the cluster. If your yarn-site.xml is empty, then it will pull its default values from yarn-default.xml (which is hard-coded, I believe).
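
Those defaults are exactly why the client above was dialing 0.0.0.0:8032: in yarn-default.xml the ResourceManager address is derived from a hostname that defaults to 0.0.0.0, roughly like this:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>${yarn.resourcemanager.hostname}:8032</value>
</property>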

Assuming your cluster is not running in HA mode and has a mostly default configuration, then your workstation's yarn-site.xml should contain at least an entry like this:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host.yourdomain.com</value>
</property>

Obviously, replace the hostname with the hostname of the host your ResourceManager actually runs on. And of course, any Spark interaction with HDFS will additionally need a properly configured hdfs-site.xml, and so on.
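
As an aside, the second log above shows a failover to rm2, which suggests the cluster actually is running ResourceManager HA. In that case the client-side yarn-site.xml needs the HA properties instead of a single hostname; a minimal sketch (the cluster-id and rm1/rm2 hostnames below are placeholders, not your real values):

<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yourcluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm1-host.yourdomain.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm2-host.yourdomain.com</value>
</property>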

Some cluster management software will have something like a "generate client configs" feature (I'm thinking specifically of my Cloudera experience here), which will give you a .tar.gz with all of the config files correctly populated for accessing the cluster from an external workstation.
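
A rough sketch of using such a bundle (the file name and layout are hypothetical and vary by vendor):

tar -xzf yarn-clientconfig.tar.gz -C ~/cluster-conf   # hypothetical bundle name
export HADOOP_CONF_DIR=~/cluster-conf/yarn-conf       # then spark-submit as usual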

Further recommendations: if you plan to use Spark on YARN heavily in this cluster, Spark recommends making sure you have the external shuffle service configured to launch with your YARN NodeManagers. (Keep in mind this config directive has to be present in the yarn-site.xml where YARN's NodeManager services run, not on your workstation; see the sketch below.)
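
For reference, the NodeManager-side entries for that service look roughly like this (assuming the Spark YARN shuffle jar is on the NodeManager classpath):

<property>
  <name>yarn.nodemanager.aux-services</name>
  <!-- append spark_shuffle to whatever services are already listed -->
  <value>mapreduce_shuffle,spark_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
  <value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>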

Yes, I am running this on my local machine. Why should I change it to the localhost IP? It is 127.0.1.1 right now. EDIT: in case you need it, just changed 127.0.1.1 to 127.0.0.1 and it gives the same error.

If you don't intend to use the existing YARN cluster, then I would suggest running with --master local[n], where n is the number of cores on your workstation.

I know, local works fine, but I have a YARN cluster and I want to run jobs on it. I have set HADOOP_CONF_DIR; I have tried both HADOOP_HOME/etc/hadoop/conf and HADOOP_HOME/etc/hadoop. That is where yarn-site.xml lives, but there is nothing in it, just the default file. Do I need to put something in there? My colleagues can run jobs on the YARN cluster. Checking the Hadoop ConnectionRefused page, it says: make sure the destination address in the exception is not 0.0.0.0 - this means you have not actually configured the client with the real address for that service, and instead it is picking up the server-side property. So I think I need to give some address in the config files, because it is trying to connect to 0.0.0.0/0.0.0.0:8032.

Correct, your yarn-site.xml should contain the config options for your YARN cluster. Maybe your colleague could share his yarn-site.xml file with you? Better yet, his entire HADOOP_HOME?

OK! I am struggling a bit to understand this. Should these conf files go in the spark/conf dir or in the etc/hadoop dir? The Spark docs say nothing about it! I cannot find any blog, tutorial, or documentation on wiring Spark up to YARN. All I have found is setting paths like SPARK_HOME and HADOOP_CONF_DIR in .profile to make them accessible. I also changed spark-env.sh and updated HADOOP_CONF_DIR and YARN_CONF_DIR to etc/hadoop. The other tutorials cover how to set up a YARN cluster, which I don't need here; I just want to submit jobs to the YARN cluster.

You need to remove 127.0.1.1 from the hosts files of all machines in the cluster... hostnames should resolve to external addresses, not local ones. Hence ResourceManager at /0.0.0.0:8032 - that needs to be the actual IP address in yarn-site.xml.
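
Tying the thread together: on the workstation, the fix boils down to a yarn-site.xml that names the real ResourceManager instead of the 0.0.0.0 default, for example (the value below is a placeholder for your RM's actual, externally resolvable hostname or IP):

<property>
  <name>yarn.resourcemanager.hostname</name>
  <!-- placeholder: your ResourceManager's real hostname or IP -->
  <value>10.20.30.40</value>
</property>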