
Scala ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 1) java.net.NoRouteToHostException: No route to host


I am trying to run a word count Spark application and I get this error every time. Please help. Below is the wordcount.scala file; after running sbt package, I ran the spark-submit command.

package main

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]) {

    val conf = new SparkConf().setAppName("Word Count")
    val sc = new SparkContext(conf)

    val textfile = sc.textFile("file:///usr/local/spark/README.md")
    val tokenizeddata = textfile.flatMap(line => line.split(" "))
    val countprep = tokenizeddata.map(word => (word,1))
    val counts = countprep.reduceByKey((accumvalue,newvalue)=>(accumvalue+newvalue))
    val sortedcount = counts.sortBy(kvpair=>kvpair._2,false)
    sortedcount.saveAsTextFile("file:///usr/local/wordcount")
  }
}    
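
For reference, since the jar below comes out of sbt package, a minimal build.sbt along these lines would produce an artifact named word-count_2.10-1.0.jar; the exact build file and Spark version are assumptions, as they are not shown in the question:

// Hypothetical build.sbt; sbt lowercases "Word Count" into the artifact name word-count
name := "Word Count"

version := "1.0"

scalaVersion := "2.10.4"

// Spark 1.1.x is a guess based on the Spark 1.x output shown below
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0" % "provided"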
Then I ran the following command:

 bin/spark-submit --class "main.WordCount" --master "local[*]" "/home/hadoop/SparkApps/target/scala-2.10/word-count_2.10-1.0.jar"
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
15/11/28 07:38:51 ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 1)
java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:589)
        at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
        at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
        at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
        at sun.net.www.http.HttpClient.New(HttpClient.java:308)
        at sun.net.www.http.HttpClient.New(HttpClient.java:326)
        at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
        at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
        at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
        at org.apache.spark.util.Utils$.fetchFile(Utils.scala:375)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:325)
        at org.apache.spark.executor.Executor$$anonfun$org$apache$spark$executor$Executor$$updateDependencies$6.apply(Executor.scala:323)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
        at org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:323)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:158)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Maybe you should add .setMaster("local").
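
If you try that, a minimal sketch of the change (only the SparkConf lines differ from the code above) would be:

// Hard-coding the master mirrors the --master "local[*]" flag used on the command line;
// the answer above suggests plain "local", which runs with a single worker thread.
val conf = new SparkConf()
  .setAppName("Word Count")
  .setMaster("local[*]")
val sc = new SparkContext(conf)

Keep in mind that a master set in code takes precedence over the one passed to spark-submit, so this is mainly useful for ruling out cluster configuration issues.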

I think you need to set the Spark home in your application. Can you share the entire output of the spark shell? You should see an INFO SparkContext: Added JAR message. I wonder whether the 1.0 version is giving Spark trouble.
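
As a rough sketch of what "setting the Spark home" and explicitly adding the application jar could look like with the Spark 1.x SparkConf API (the paths reuse the ones from the question and are otherwise assumptions, not a confirmed fix):

// setSparkHome tells worker nodes where Spark is installed;
// setJars distributes the application jar to the executors.
val conf = new SparkConf()
  .setAppName("Word Count")
  .setSparkHome("/usr/local/spark")
  .setJars(Seq("/home/hadoop/SparkApps/target/scala-2.10/word-count_2.10-1.0.jar"))
val sc = new SparkContext(conf)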