
Spark YARN client on Windows 7 issue

Tags: apache-spark, yarn, cloudera-cdh

I am trying to execute

spark-submit --master yarn-client
on a Windows 7 client machine against a CDH 5.4.5 cluster. I downloaded the Spark 1.5 assembly from spark.apache.org, then downloaded the YARN configuration from the Cloudera Manager running on the cluster and put its path into the YARN_CONF_DIR environment variable on the client.
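
For reference, that client-side setup boils down to something like the following in a Windows command prompt; the yarn-conf path is the one that appears in the log below, while the application class and jar are placeholders for illustration only:

rem Point Spark at the YARN client configuration downloaded from Cloudera Manager
set YARN_CONF_DIR=C:\packages\hadoop-client\yarn-conf
rem Submit in yarn-client mode (application class and jar are placeholders)
spark-submit --master yarn-client --class com.example.MyApp my-app.jar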

The application itself works correctly, but the client gets an exception:

15/10/16 10:54:59 WARN net.ScriptBasedMapping: Exception running /etc/hadoop/conf.cloudera.yarn/topology.py 10.20.52.104
java.io.IOException: Cannot run program "/etc/hadoop/conf.cloudera.yarn/topology.py" (in directory "C:\workspace\development\"): CreateProcess error=2, ═х єфрхЄё  эрщЄш єърчрээ√щ Їрщы
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
        at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
        at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
        at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
        at org.apache.spark.scheduler.cluster.YarnScheduler.getRackForHost(YarnScheduler.scala:38)
        at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$1.apply(TaskSchedulerImpl.scala:270)
        at org.apache.spark.scheduler.TaskSchedulerImpl$$anonfun$resourceOffers$1.apply(TaskSchedulerImpl.scala:262)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.TaskSchedulerImpl.resourceOffers(TaskSchedulerImpl.scala:262)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint.makeOffers(CoarseGrainedSchedulerBackend.scala:167)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend$DriverEndpoint$$anonfun$receive$1.applyOrElse(CoarseGrainedSchedulerBackend.scala:106)
        at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:178)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:127)
        at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:198)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:126)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
        at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
        at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
        at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
        at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:93)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
        at java.lang.ProcessImpl.create(Native Method)
        at java.lang.ProcessImpl.<init>(ProcessImpl.java:386)
        at java.lang.ProcessImpl.start(ProcessImpl.java:137)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
        ... 38 more
15/10/16 10:48:57 WARN net.ScriptBasedMapping: Exception running C:\packages\hadoop-client\yarn-conf\topology.py 10.20.52.105
java.io.IOException: Cannot run program "C:\packages\hadoop-client\yarn-conf\topology.py" (in directory "C:\workspace\development\"): CreateProcess error=193, %1 эх  ты хЄё  яЁшыюцхэшхь Win32
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
        at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
        at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
        at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
        at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
        at org.apache.spark.scheduler.cluster.YarnScheduler.getRackForHost(YarnScheduler.scala:38)
        at org.apache.spark.scheduler.TaskSetManager$$anonfun$org$apache$spark$scheduler$TaskSetManager$$addPendingTask$1.apply(TaskSetManager.scala:213)
        at org.apache.spark.scheduler.TaskSetManager$$anonfun$org$apache$spark$scheduler$TaskSetManager$$addPendingTask$1.apply(TaskSetManager.scala:192)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.TaskSetManager.org$apache$spark$scheduler$TaskSetManager$$addPendingTask(TaskSetManager.scala:192)
        at org.apache.spark.scheduler.TaskSetManager$$anonfun$1.apply$mcVI$sp(TaskSetManager.scala:161)
        at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)

As far as I understand, Spark on Windows cannot correctly invoke the topology.py script via python.exe, but how do I fix this?

Just comment out the net.topology.script.file.name parameter in yarn-site.xml.
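
As a sketch, the commented-out property in yarn-site.xml would look like the snippet below; the script path is the one from the log above, so substitute whatever your downloaded configuration actually contains:

<property>
  <name>net.topology.script.file.name</name>
  <value><!--/etc/hadoop/conf.cloudera.yarn/topology.py--></value>
</property>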

I ran into exactly the same problem as above on Hortonworks HDP 2.4 while trying to access it from an IPython notebook with Spark. I solved it with @mikhail kramer's suggestion above.

On the Windows client I had to comment out the value of the net.topology.script.file.name property in the core-site.xml file I had downloaded via Ambari. The commented-out value now looks like this:

<property>
  <name>net.topology.script.file.name</name>
  <value><!--/etc/hadoop/conf/topology_script.py--></value>
</property>


I hope this helps the next person who runs into the same problem.

Just an opinion: almost every developer working with [apache-spark] is on some form of Unix, usually a popular Linux distribution such as Ubuntu or RHEL. If you join the Linux ranks, your odds of finding help will multiply.

Hi mehmet, that is true. Getting Spark on YARN to run on Windows is more or less of scientific interest to me.

To use Spark you have to download the client configuration files from Cloudera Manager to the client node, and comment out the parameter mentioned above in the yarn-site.xml configuration. But as far as I can see, the problem described does not affect the result of the Spark application.

Again, I do not understand what you are saying. I do not even get a result, because due to the problem above my Spark job is not even submitted.

You want to submit the Spark application from a Windows machine rather than from inside the cluster, right? For that you downloaded a Spark distribution (the same version as the cluster??) and tried to execute 'spark-submit' on Windows, right? Spark will not start without knowledge of the cluster topology and some Hadoop binaries (winutils.exe). You have to get a Hadoop binary distribution for Windows of the same version as the cluster, and download the configuration files (from Cloudera Manager in Cloudera's case). Finally, set the environment variables 'HADOOP_HOME' and 'YARN_CONF_DIR'. Ah, and also... edit yarn-site.xml as described above.
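
Pulling that last comment together, a minimal sketch of the full Windows-side setup might look like this; the directory layout is illustrative, and only the environment variable names, winutils.exe, and the yarn-site.xml edit come from the thread itself:

rem Hadoop binaries for Windows (must contain bin\winutils.exe and match the cluster version)
set HADOOP_HOME=C:\packages\hadoop
rem Client configuration files downloaded from Cloudera Manager
set YARN_CONF_DIR=C:\packages\hadoop-client\yarn-conf
rem With net.topology.script.file.name commented out in yarn-site.xml, submit as usual
spark-submit --master yarn-client --class com.example.MyApp my-app.jar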