Python 2.7: Adding the spark-deep-learning external jar to PySpark on Amazon EMR


I have been trying to get the spark-deep-learning library working on my EMR cluster so that I can read images in parallel with Python 2.7. I have been searching around for quite some time now and have not found a solution. I have tried setting various configuration options on the SparkSession, but I get the error below whenever I try to create the SparkSession object.
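A minimal sketch of the kind of session setup I have been attempting (the app name is a placeholder, the exact config keys varied between attempts, and the jar path is the copy on my master node):

from pyspark.sql import SparkSession

# One attempted setup: point spark.jars at the local copy of the
# spark-deep-learning jar so it ends up on the driver and executor classpaths.
spark = (SparkSession.builder
         .appName("ImageReader")  # placeholder name
         .master("yarn")
         .config("spark.jars",
                 "/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar")
         .getOrCreate())

Creating the session this way fails with: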

ERROR SparkContext:91 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:89)
   at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
   at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
   at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
   at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
   at py4j.Gateway.invoke(Gateway.java:238)
   at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
   at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
   at py4j.GatewayConnection.run(GatewayConnection.java:214)
   at java.lang.Thread.run(Thread.java:748)
The above is what I get when using a Jupyter notebook. I also tried submitting the .py file with spark-submit, passing the jar as the value of --jars, --driver-class-path, and --conf spark.executor.extraClassPath, as described elsewhere. Below is the command I submitted and the resulting import error:

bin/spark-submit --jars /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
    --driver-class-path /home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
    --conf spark.executor.extraClassPath=/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar \
    /home/hadoop/RunningCode6.py

Traceback (most recent call last):
  File "/home/hadoop/RunningCode6.py", line 74, in <module>
  from sparkdl import KerasImageFileTransformer
ImportError: No module named sparkdl
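As far as I understand, --jars only puts the jar on the JVM classpath; it does not put the bundled Python sources on the PYTHONPATH, which would explain the ImportError. The spark-deep-learning README uses --packages instead, which (if I read the spark-submit behaviour correctly) also adds the resolved jar to the PYTHONPATH for Python apps. A sketch of that form of the command (the package coordinate is my best guess for this release, untested on my cluster):

bin/spark-submit \
    --packages databricks:spark-deep-learning:0.2.0-spark2.1-s_2.11 \
    /home/hadoop/RunningCode6.py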
The library works fine in standalone mode, but I keep running into one of the errors above whenever I use cluster mode.
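My guess (only a guess) is that in standalone mode the driver can import sparkdl straight from the local jar, whereas on YARN the Python package also has to be shipped to the cluster explicitly. A sketch of doing that from inside the script, assuming the jar bundles the Python sources at its root (a jar is a zip archive, so Python can import from it once it is on sys.path):

from pyspark.sql import SparkSession

jar = "/home/hadoop/spark-deep-learning-0.2.0-spark2.1-s_2.11.jar"

spark = (SparkSession.builder
         .appName("ImageReader")  # placeholder name
         .config("spark.jars", jar)  # JVM side: classes on the classpath
         .getOrCreate())

# Python side: ship the archive and put it on sys.path on the driver
# and the executors so `import sparkdl` can resolve.
spark.sparkContext.addPyFile(jar)

from sparkdl import KerasImageFileTransformer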

I really hope someone can help me solve this, because I have been staring at it for weeks now and I need to get it working.

Thanks!