Elephas not loading in Python PySpark: No module named elephas.spark_model


I am trying to distribute Keras training on a cluster and am using Elephas to do that. However, when I run the basic example from the Elephas documentation ():
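The example is roughly of the following form (a sketch only; the Keras model and data below are placeholders, and the exact SparkModel constructor arguments differ between Elephas versions):

```
from pyspark import SparkConf, SparkContext
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

# The import that later fails on the executors:
from elephas.spark_model import SparkModel
from elephas.utils.rdd_utils import to_simple_rdd

conf = SparkConf().setAppName('elephas_basic_example')
sc = SparkContext(conf=conf)

# Placeholder data standing in for the real training set.
x_train = np.random.rand(1000, 784)
y_train = np.eye(10)[np.random.randint(0, 10, 1000)]

model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Turn the NumPy arrays into an RDD of (features, label) pairs.
rdd = to_simple_rdd(sc, x_train, y_train)

# Constructor arguments differ between Elephas versions (older releases also take sc).
spark_model = SparkModel(model, frequency='epoch', mode='asynchronous')
spark_model.fit(rdd, epochs=10, batch_size=32, verbose=0, validation_split=0.1)
```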

I get the following error:

 ImportError: No module named elephas.spark_model



```
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 5.0 failed 4 times, most recent failure: Lost task 1.3 in stage 5.0 (TID 58, xxxx, executor 8): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/xx/xx/hadoop/yarn/local/usercache/xx/appcache/application_151xxx857247_19188/container_1512xxx247_19188_01_000009/pyspark.zip/pyspark/worker.py", line 163, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/xx/xx/hadoop/yarn/local/usercache/xx/appcache/application_151xxx857247_19188/container_1512xxx247_19188_01_000009/pyspark.zip/pyspark/worker.py", line 54, in read_command
    command = serializer._read_with_length(file)
  File "/yarn/local/usercache/xx/appcache/application_151xxx857247_19188/container_1512xxx247_19188_01_000009/pyspark.zip/pyspark/serializers.py", line 169, in _read_with_length
    return self.loads(obj)
  File "/yarn//local/usercache/xx/appcache/application_151xxx857247_19188/container_1512xxx247_19188_01_000009/pyspark.zip/pyspark/serializers.py", line 454, in loads
    return pickle.loads(obj)
ImportError: No module named elephas.spark_model

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
I have run the script both via spark-submit and directly in a Jupyter notebook; both result in the same problem.

Can anyone give me some pointers? Is this related to elephas, or is it a PySpark problem?

EDIT: I also uploaded a zip file of the virtual environment and call it in my script:

```
virtualenv spark_venv --relocatable
cd spark_venv
zip -qr ../spark_venv.zip *

PYSPARK_DRIVER_PYTHON=`which python` spark-submit --driver-memory 1G --py-files spark_venv.zip filename.py
```
Then in the file I do the following:

```
sc.addPyFile("spark_venv.zip")
```

After this, Keras imports without any problem, but I still get the elephas error from above.

You should add the elephas library as an argument to the spark-submit command.

Quoting the official guide:

For Python, you can use the --py-files argument of spark-submit to add .py, .zip or .egg files to be distributed with your application. If you depend on multiple Python files we recommend packaging them into a .zip or .egg.


I found a solution for how to properly load a virtual environment onto the master and all the worker nodes:

```
virtualenv venv --relocatable
cd venv
zip -qr ../venv.zip *

PYSPARK_PYTHON=./SP/bin/python spark-submit --master yarn --deploy-mode cluster --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./SP/bin/python --driver-memory 4G --archives venv.zip#SP filename.py
```
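As a sanity check, a snippet along these lines (hypothetical, not part of the original filename.py) can confirm that the executors really use the shipped interpreter and can import elephas:

```
from pyspark import SparkContext

sc = SparkContext(appName="venv_check")

def probe(_):
    # Runs on the executors: report which interpreter is in use
    # and whether elephas can be imported there.
    import sys
    try:
        import elephas.spark_model  # noqa: F401
        status = "elephas OK"
    except ImportError as exc:
        status = "elephas missing: %s" % exc
    return (sys.executable, status)

print(sc.parallelize(range(2), 2).map(probe).collect())
```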
More details in this GitHub issue:

I zipped the environment that elephas was installed into and included it with --py-files venv.zip, after which I loaded it with sc.addPyFile("venv.zip"). That did not help; am I missing a step? I also found an example that says to import venv after loading it into sc, but running import venv does not work and tells me that venv does not exist. Interestingly, everything else seems to load fine: when I include venv.zip, things like Keras and the other dependencies import without problems, but I still get the elephas error described above.

@ivan_bilan Does venv.zip contain the required elephas files? Can you check the archive manually?