Scala Spark standalone mode: Connection to 127.0.1.1:<port> refused


I am running Spark 0.7.2 in standalone mode, with 7 workers and 1 separate master, processing about 90 GB (compressed: 19 GB) of log data with the following driver program:

System.setProperty("spark.default.parallelism", "32")
val sc = new SparkContext("spark://10.111.1.30:7077", "MRTest", System.getenv("SPARK_HOME"), Seq(System.getenv("NM_JAR_PATH")))
val logData = sc.textFile("hdfs://10.111.1.30:54310/logs/")
val dcxMap = logData.map(line => (line.split("\\|")(0),   
                                  line.split("\\|")(9)))
                                  .reduceByKey(_ + " || " + _)
dcxMap.saveAsTextFile("hdfs://10.111.1.30:54310/out")
After all ShuffleMapTasks of Stage 1 have finished:

Stage 1 (reduceByKey at DcxMap.scala:31) finished in 111.312 s
it submits Stage 0:

Submitting Stage 0 (MappedRDD[6] at saveAsTextFile at DcxMap.scala:38), which is now runnable
After some serialization work, it prints:

spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host23
spark.MapOutputTracker - Size of output statuses for shuffle 0 is 2008 bytes
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host21
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host22
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host26
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host24
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host27
spark.MapOutputTrackerActor - Asked to send map output locations for shuffle 0 to host28
After this, nothing happens anymore, and top shows that the workers are all idle now. Looking at the logs on the worker machines, the same thing appears on every one of them:

13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:34288]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:36040] 
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:50467]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:60833]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:49893]
13/06/21 07:32:25 INFO network.SendingConnection: Initiating connection to [host27/127.0.1.1:39907]
Then, for each of these "Initiating connection" attempts, every worker throws the same error (taking host27's log as an example; only the first occurrence is shown):

Why does this happen? The workers seem to communicate with each other just fine; the only problem appears to be when they try to send messages to themselves. In the example above, host27 tries to send itself 6 messages and fails 6 times, while sending messages to the other workers works fine. Does anyone have an idea?

Edit: Could this be related to Spark using 127.0.1.1 instead of 127.0.0.1? My /etc/hosts looks like this:

127.0.0.1       localhost
127.0.1.1       host27.<ourdomain>  host27
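
To see which address the JVM (and presumably Spark, when SPARK_LOCAL_IP is not set) picks as the local one with this hosts file, a minimal check along these lines can help; this is only a sketch, and host27 is just the example hostname from the logs above:

import java.net.InetAddress

object CheckLocalAddress {
  def main(args: Array[String]): Unit = {
    // What the JVM reports as the local host; with the "127.0.1.1 host27"
    // entry above this typically prints 127.0.1.1 rather than the LAN address
    // that the other machines can actually reach.
    val local = InetAddress.getLocalHost
    println("getLocalHost: " + local.getHostName + " -> " + local.getHostAddress)

    // Resolving the machine's own hostname explicitly shows the same thing.
    val byName = InetAddress.getByName("host27")
    println("host27 resolves to " + byName.getHostAddress)
  }
}

If both lookups return 127.0.1.1, that would match the addresses the workers are trying to connect to in the log excerpt above.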

I found out that the problem was related to this question. However, for me, setting SPARK_LOCAL_IP on the workers did not solve it. I had to change /etc/hosts to:

127.0.0.1       localhost
Now it runs smoothly.
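
For reference, another fix that is often suggested for this 127.0.1.1 problem, which I did not end up needing, is to map the hostname to the machine's real LAN address instead of the loopback-style 127.0.1.1, along these lines (the 10.111.1.x address is only illustrative):

127.0.0.1       localhost
10.111.1.27     host27.<ourdomain>  host27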
