Python ValueError: Could not interpret optimizer identifier: False with TensorFlow 2.3


I am using TensorFlow 2.3 and trying to initialize the following LSTM:

from keras.layers import Dense, Activation, Input, LSTM, Dropout
from keras.optimizers import Adam
from keras.models import Model, Sequential

def create_model() -> Model:
    """
    Create the Deep Learning model
    :return the created model
    """
    input_student = Input(shape=(360,97,), dtype='float')


    lstm = LSTM(
          units=97,
          dropout=0.5,
          recurrent_dropout=0.5,
          return_sequences=False,
          return_state=False
      )(input_student)
    print(lstm)
    lstm = Dropout(0.5)(lstm)
    output = Dense(1, activation="sigmoid")(lstm)

    optim = Adam(lr=0.001)
    model = Model(inputs=input_student, outputs=output)
    model.compile(
      loss="binary_crossentropy", optimizer=optim
    )

    model.summary()
    return model
If I try to train the network with Elephas, I get the following error:

>>> Fit model
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-18-fcce77bcaaa0> in <module>()
----> 1 spark_model.fit(rdd, epochs=5, batch_size=32, verbose=1, validation_split=0.3)

7 frames
/usr/local/lib/python3.6/dist-packages/elephas/spark_model.py in fit(self, rdd, epochs, batch_size, verbose, validation_split)
    149 
    150         if self.mode in ['asynchronous', 'synchronous', 'hogwild']:
--> 151             self._fit(rdd, epochs, batch_size, verbose, validation_split)
    152         else:
    153             raise ValueError(

/usr/local/lib/python3.6/dist-packages/elephas/spark_model.py in _fit(self, rdd, epochs, batch_size, verbose, validation_split)
    159         self._master_network.compile(optimizer=self.master_optimizer,
    160                                      loss=self.master_loss,
--> 161                                      metrics=self.master_metrics)
    162         if self.mode in ['asynchronous', 'hogwild']:
    163             self.start_server()

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, weighted_metrics, run_eagerly, **kwargs)
    539       self._run_eagerly = run_eagerly
    540 
--> 541       self.optimizer = self._get_optimizer(optimizer)
    542       self.compiled_loss = compile_utils.LossesContainer(
    543           loss, loss_weights, output_names=self.output_names)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _get_optimizer(self, optimizer)
    565       return opt
    566 
--> 567     return nest.map_structure(_get_single_optimizer, optimizer)
    568 
    569   @trackable.no_automatic_dependency_tracking

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py in map_structure(func, *structure, **kwargs)
    633 
    634   return pack_sequence_as(
--> 635       structure[0], [func(*x) for x in entries],
    636       expand_composites=expand_composites)
    637 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/nest.py in <listcomp>(.0)
    633 
    634   return pack_sequence_as(
--> 635       structure[0], [func(*x) for x in entries],
    636       expand_composites=expand_composites)
    637 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in _get_single_optimizer(opt)
    559 
    560     def _get_single_optimizer(opt):
--> 561       opt = optimizers.get(opt)
    562       if (self._dtype_policy.loss_scale is not None and
    563           not isinstance(opt, lso.LossScaleOptimizer)):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizers.py in get(identifier)
    901   else:
    902     raise ValueError(
--> 903         'Could not interpret optimizer identifier: {}'.format(identifier))

ValueError: Could not interpret optimizer identifier: False
But I really do not understand what is going on, since I am not using the imports from Keras but the ones from TensorFlow, as pointed out in other answers.
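
For comparison, the tensorflow.keras form of those imports (presumably what "the imports from TensorFlow" refers to; note that the snippet above still imports from plain keras) would look like this:

# Hedged example: tf.keras equivalents of the imports shown earlier.
from tensorflow.keras.layers import Dense, Activation, Input, LSTM, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model, Sequential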

Can anyone help me?

I tried to reproduce this issue with the latest 1.0.0 release and TensorFlow 2.3.0, and I could not. Before that release, Elephas was not fully compatible with the TensorFlow 2.x API, so I would suggest retrying with the newer version.
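
A minimal sketch of that retry (after pip install elephas==1.0.0), following the construction shown in the Elephas README; the SparkContext sc and the NumPy arrays x_train / y_train are assumed names not taken from the question:

# Assumed: sc is an existing SparkContext, x_train / y_train are NumPy arrays.
from elephas.utils.rdd_utils import to_simple_rdd
from elephas.spark_model import SparkModel

model = create_model()                       # the compiled model from the question
rdd = to_simple_rdd(sc, x_train, y_train)    # distribute the training data as an RDD
spark_model = SparkModel(model, frequency='epoch', mode='asynchronous')
spark_model.fit(rdd, epochs=5, batch_size=32, verbose=1, validation_split=0.3)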

The code in your question seems entirely unrelated to the error; everything starts from spark_model.fit(rdd, epochs=5, batch_size=32, verbose=1, validation_split=0.3), where the optimizer comes from self.master_optimizer, so you will have to provide more information for anyone to be able to help you.
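
As a starting point for that, a small sketch to collect the version information that would help diagnose the mismatch (it assumes the packages are installed under their usual distribution names):

# Print the versions of the libraries involved in the traceback.
import pkg_resources
import tensorflow as tf

print("tensorflow:", tf.__version__)
for pkg in ("elephas", "pyspark", "keras"):
    try:
        print(pkg + ":", pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg + ": not installed")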