Python cPickle.PicklingError: Could not serialize object: NotImplementedError

Tags: python, tensorflow, keras, pyspark, elephas

Error when running the Elephas example without any modification. (The same error also occurs when installing the git version with pip: pip install --no-cache-dir git+git://github.com/maxpumperla/elephas.git@master)

The example I used:

(I tried enabling tf.compat.v1.enable_eager_execution(), but the rest of the code did not work with that setting.)
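
For reference, a minimal sketch of what was attempted; placing the call at the top of the script, before any TensorFlow graph is built, is my assumption:

    import tensorflow as tf

    # Attempted workaround: enable eager execution before any other
    # TensorFlow work happens. Other parts of the example code failed
    # to run with this setting.
    tf.compat.v1.enable_eager_execution()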

pyspark_1      | 19/10/25 10:23:03 INFO SparkContext: Created broadcast 12 from broadcast at NativeMethodAccessorImpl.java:0
pyspark_1      | Traceback (most recent call last):
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/serializers.py", line 590, in dumps
pyspark_1      |     return cloudpickle.dumps(obj, 2)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 863, in dumps
pyspark_1      |     cp.dump(obj)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 260, in dump
pyspark_1      |     return Pickler.dump(self, obj)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 224, in dump
pyspark_1      |     self.save(obj)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 568, in save_tuple
pyspark_1      |     save(element)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 406, in save_function
pyspark_1      |     self.save_function_tuple(obj)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 549, in save_function_tuple
pyspark_1      |     save(state)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
pyspark_1      |     self._batch_setitems(obj.iteritems())
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
pyspark_1      |     save(v)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 606, in save_list
pyspark_1      |     self._batch_appends(iter(obj))
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 642, in _batch_appends
pyspark_1      |     save(tmp[0])
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/cloudpickle.py", line 660, in save_instancemethod
pyspark_1      |     obj=obj)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 401, in save_reduce
pyspark_1      |     save(args)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 554, in save_tuple
pyspark_1      |     save(element)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 331, in save
pyspark_1      |     self.save_reduce(obj=obj, *rv)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 425, in save_reduce
pyspark_1      |     save(state)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
pyspark_1      |     self._batch_setitems(obj.iteritems())
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
pyspark_1      |     save(v)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 606, in save_list
pyspark_1      |     self._batch_appends(iter(obj))
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 642, in _batch_appends
pyspark_1      |     save(tmp[0])
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 331, in save
pyspark_1      |     self.save_reduce(obj=obj, *rv)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 425, in save_reduce
pyspark_1      |     save(state)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 286, in save
pyspark_1      |     f(self, obj) # Call unbound method with explicit self
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 655, in save_dict
pyspark_1      |     self._batch_setitems(obj.iteritems())
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 687, in _batch_setitems
pyspark_1      |     save(v)
pyspark_1      |   File "/usr/lib/python2.7/pickle.py", line 306, in save
pyspark_1      |     rv = reduce(self.proto)
pyspark_1      |   File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 1152, in __reduce__
pyspark_1      |     initial_value=self.numpy(),
pyspark_1      |   File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py", line 906, in numpy
pyspark_1      |     "numpy() is only available when eager execution is enabled.")
pyspark_1      | NotImplementedError: numpy() is only available when eager execution is enabled.
pyspark_1      | Traceback (most recent call last):
pyspark_1      |   File "/home/ubuntu/./spark.py", line 169, in <module>
pyspark_1      |     fitted_pipeline = pipeline.fit(train_df)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/ml/base.py", line 132, in fit
pyspark_1      |     return self._fit(dataset)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/ml/pipeline.py", line 109, in _fit
pyspark_1      |     model = stage.fit(dataset)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/ml/base.py", line 132, in fit
pyspark_1      |     return self._fit(dataset)
pyspark_1      |   File "/usr/local/lib/python2.7/dist-packages/elephas/ml_model.py", line 92, in _fit
pyspark_1      |     validation_split=self.get_validation_split())
pyspark_1      |   File "/usr/local/lib/python2.7/dist-packages/elephas/spark_model.py", line 151, in fit
pyspark_1      |     self._fit(rdd, epochs, batch_size, verbose, validation_split)
pyspark_1      |   File "/usr/local/lib/python2.7/dist-packages/elephas/spark_model.py", line 188, in _fit
pyspark_1      |     gradients = rdd.mapPartitions(worker.train).collect()
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 816, in collect
pyspark_1      |     sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 2532, in _jrdd
pyspark_1      |     self._jrdd_deserializer, profiler)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 2434, in _wrap_function
pyspark_1      |     pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/rdd.py", line 2420, in _prepare_for_python_RDD
pyspark_1      |     pickled_command = ser.dumps(command)
pyspark_1      |   File "/home/ubuntu/spark-2.4.4-bin-hadoop2.7/python/pyspark/serializers.py", line 600, in dumps
pyspark_1      |     raise pickle.PicklingError(msg)
pyspark_1      | cPickle.PicklingError: Could not serialize object: NotImplementedError: numpy() is only available when eager execution is enabled.
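
The bottom frames show where this comes from: cloudpickle walks the worker closure down into the Keras model's variables, and ResourceVariable.__reduce__ calls self.numpy(), which is only implemented in eager mode. A minimal sketch that reproduces just that underlying failure outside Spark (assuming a TensorFlow version whose ResourceVariable.__reduce__ matches the traceback above; eager execution is disabled explicitly to mirror the graph-mode Spark workers, and on the TF 1.x from the traceback graph mode is the default anyway):

    import pickle
    import tensorflow as tf

    # Mirror the graph-mode context of the Spark job.
    tf.compat.v1.disable_eager_execution()

    v = tf.Variable(1.0)  # a ResourceVariable, like the Keras model weights

    # ResourceVariable.__reduce__ calls self.numpy(), which raises
    # NotImplementedError in graph mode; PySpark's cloudpickle then
    # surfaces it as cPickle.PicklingError: Could not serialize object.
    pickle.dumps(v)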