Apache Spark: running a Spark job in Oozie on a YARN cluster

Tags: apache-spark, yarn, oozie, hue

I have created a Spark job using Oozie, configured to run on a YARN cluster. The Spark program is written in Scala and is very simple: it just initializes a SparkContext, calls println("hello world"), and stops the SparkContext.
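
For context, a minimal sketch of what such a program might look like; the package/object name com.test1 is taken from the workflow below, the rest is a plausible reconstruction rather than the actual source:

package com

import org.apache.spark.{SparkConf, SparkContext}

object test1 {
  def main(args: Array[String]): Unit = {
    // Minimal configuration; the master (yarn-cluster) is supplied by
    // Oozie/spark-submit at launch time, so it is not set here.
    val conf = new SparkConf().setAppName("MySpark")
    val sc = new SparkContext(conf)

    println("hello world")

    // Stop the context so the YARN application finishes cleanly.
    sc.stop()
  }
}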

Here is the workflow.xml file:

<workflow-app name="My_Workflow" xmlns="uri:oozie:workflow:0.5">
    <start to="spark-0177"/>
    <kill name="Kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <action name="spark-0177">
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <master>yarn-cluster</master>
            <mode>cluster</mode>
            <name>MySpark</name>
            <class>com.test1</class>
            <jar>/user/hue/oozie/workspaces/tl_test/lib/testOozie1.jar</jar>
            <spark-opts>--executor-cores 2  --driver-memory 5g --num-executors 2 --executor-memory 5g</spark-opts>
        </spark>
        <ok to="End"/>
        <error to="Kill"/>
    </action>
    <end name="End"/>
</workflow-app>

However, I get the following error:

 Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.SparkMain], main() threw exception, Can not create a Path from an empty string
    java.lang.IllegalArgumentException: Can not create a Path from an empty string
        at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
        at org.apache.hadoop.fs.Path.<init>(Path.java:135)
        at org.apache.hadoop.fs.Path.<init>(Path.java:94)
        at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:191)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$3.apply(Client.scala:254)
        at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$3.apply(Client.scala:248)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:248)
        at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:384)
        at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:102)
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:623)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:651)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
        at org.apache.oozie.action.hadoop.SparkMain.runSpark(SparkMain.java:105)
        at org.apache.oozie.action.hadoop.SparkMain.run(SparkMain.java:96)
        at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:46)
        at org.apache.oozie.action.hadoop.SparkMain.main(SparkMain.java:40)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:228)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:370)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runTask(LocalContainerLauncher.java:295)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.access$200(LocalContainerLauncher.java:181)
        at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler$1.run(LocalContainerLauncher.java:224)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    Oozie Launcher failed, finishing Hadoop job gracefully

    Oozie Launcher, uploading action data to HDFS sequence file: hdfs://MYRNDSVRVM350:8020/user/oozie-oozi/0000084-150828094553499-oozie-oozi-W/spark-156b--spark/action-data.seq

    Oozie Launcher ends
Please help me out, I am stuck here.
Thanks.

Can not create a Path from an empty string

I ran into exactly the same weird issue. It turned out that multiple consecutive spaces inside <spark-opts> lead to this non-informative error: there are two spaces between --executor-cores 2 and --driver-memory 5g.
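
Assuming that diagnosis, the fix is simply to collapse the run of spaces to single spaces:

<spark-opts>--executor-cores 2 --driver-memory 5g --num-executors 2 --executor-memory 5g</spark-opts>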

Your workflow definition could be useful here.

Thanks for the quick reply. May I know whether the "jar path" you mentioned is the Spark jar? If so, the jar is located in "/user/hue/oozie/workspace/temp_folder/lib/". However, I still cannot get it right.

It is the jar containing your application - you have to package it first with sbt or maven, and provide the full path to that jar, not just the parent folder of your Spark workflow.

I have set it to the full HDFS path, but I still get the same error. (Resolved action settings: MYRNDSVRVM350.bison.local:8032, hdfs://MYRNDSVRVM350.bison.local:8020, yarn-cluster, MySpark, com.test1, hdfs://MYRNDSVRVM350.bison.local:8020/user/hue/oozie/workspaces/hue-oozie-1441794007.08/jar/testOozie1.jar, --executor-cores 2 --driver-memory 5g --num-executors 2 --executor-memory 5g, copyFileToRemote -> Spark)
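
For reference, the fully qualified <jar> element from that last comment would read (hostname and workspace path as given there):

<jar>hdfs://MYRNDSVRVM350.bison.local:8020/user/hue/oozie/workspaces/hue-oozie-1441794007.08/jar/testOozie1.jar</jar>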