
Python error: "Unable to create link (name already exists)" when saving a whole model composed of two identical pretrained models


I have a simple Keras model composed of two identical pretrained models (EfficientB2).

When I try to save the whole model (weights plus optimizer state), I get the following error:

Found 964 validated image filenames belonging to 2 classes.
Found 964 validated image filenames belonging to 2 classes.
Epoch 1/2
120/120 [==============================] - ETA: 0s - loss: 13.4150 - accuracy: 0.5128Found 297 validated image filenames belonging to 2 classes.
Found 297 validated image filenames belonging to 2 classes.
120/120 [==============================] - 90s 466ms/step - loss: 13.4148 - accuracy: 0.5127 - val_loss: 13.7815 - val_accuracy: 0.4626

Epoch 00001: saving model to /content/drive/My Drive/web_crawling/weightst-01-0.4626-13.7815.hdf5
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-132-adc5fc37359d> in <module>()
      1 # model.fit([x_train1,x_train2],y_train,batch_size=4,epochs=10,validation_split=0.1,shuffle=True,callbacks=callbacks_list)
----> 2 model.fit(train_generator,epochs=2,steps_per_epoch = tr_sample // batch_size, validation_data = validation_generator,validation_steps = val_sample // batch_size,callbacks=callbacks_list)#,class_weight=class_weight)

9 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1143           epoch_logs.update(val_logs)
   1144 
-> 1145         callbacks.on_epoch_end(epoch, epoch_logs)
   1146         training_logs = epoch_logs
   1147         if self.stop_training:

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
    426     for callback in self.callbacks:
    427       if getattr(callback, '_supports_tf_logs', False):
--> 428         callback.on_epoch_end(epoch, logs)
    429       else:
    430         if numpy_logs is None:  # Only convert once.

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py in on_epoch_end(self, epoch, logs)
   1342     # pylint: disable=protected-access
   1343     if self.save_freq == 'epoch':
-> 1344       self._save_model(epoch=epoch, logs=logs)
   1345 
   1346   def _should_save_on_batch(self, batch):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py in _save_model(self, epoch, logs)
   1406                 filepath, overwrite=True, options=self._options)
   1407           else:
-> 1408             self.model.save(filepath, overwrite=True, options=self._options)
   1409 
   1410         self._maybe_remove_file()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
   2000     # pylint: enable=line-too-long
   2001     save.save_model(self, filepath, overwrite, include_optimizer, save_format,
-> 2002                     signatures, options, save_traces)
   2003 
   2004   def save_weights(self,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
    152           'or using `save_weights`.')
    153     hdf5_format.save_model_to_hdf5(
--> 154         model, filepath, overwrite, include_optimizer)
    155   else:
    156     saved_model_save.save(model, filepath, overwrite, include_optimizer,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in save_model_to_hdf5(model, filepath, overwrite, include_optimizer)
    129     if (include_optimizer and model.optimizer and
    130         not isinstance(model.optimizer, optimizer_v1.TFOptimizer)):
--> 131       save_optimizer_weights_to_hdf5_group(f, model.optimizer)
    132 
    133     f.flush()

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in save_optimizer_weights_to_hdf5_group(hdf5_group, optimizer)
    594     for name, val in zip(weight_names, weight_values):
    595       param_dset = weights_group.create_dataset(
--> 596           name, val.shape, dtype=val.dtype)
    597       if not val.shape:
    598         # scalar

/usr/local/lib/python3.7/dist-packages/h5py/_hl/group.py in create_dataset(self, name, shape, dtype, data, **kwds)
    137             dset = dataset.Dataset(dsid)
    138             if name is not None:
--> 139                 self[name] = dset
    140             return dset
    141 

/usr/local/lib/python3.7/dist-packages/h5py/_hl/group.py in __setitem__(self, name, obj)
    371 
    372             if isinstance(obj, HLObject):
--> 373                 h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)
    374 
    375             elif isinstance(obj, SoftLink):

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5o.pyx in h5py.h5o.link()

RuntimeError: Unable to create link (name already exists)
If I change save_weights_only to True, everything works fine. I understand the problem is related to saving the optimizer parameters, but I don't know how to fix the error and save the whole model.

P.S.: Before compiling, I also ran the following code for the full model and for one of the submodels (effb3_1), but it did not solve the problem.

for i in range(len(model.weights)):
    model.weights[i]._handle_name = model.weights[i].name + "_" + str(i)
Google Colab
TF 2.4
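To see why the save fails, it helps to know that the HDF5 writer refuses duplicate dataset names. The following is a minimal pure-Python sketch (no TensorFlow or h5py needed; the weight names are hypothetical) of the mechanism: two identical submodels produce identically named optimizer slot variables, and the second attempt to link a name into the same group raises exactly this RuntimeError.

```python
class Group:
    """Stand-in for an h5py group: every linked name must be unique."""
    def __init__(self):
        self._links = {}

    def create_dataset(self, name, value):
        if name in self._links:
            # Mirrors the HDF5-level failure seen in the traceback.
            raise RuntimeError("Unable to create link (name already exists)")
        self._links[name] = value

# Two identical pretrained submodels yield identically named
# optimizer weights (names here are illustrative, not real):
weight_names = [
    "Adam/efficientb2/stem_conv/kernel/m:0",
    "Adam/efficientb2/stem_conv/kernel/m:0",
]

g = Group()
try:
    for n in weight_names:
        g.create_dataset(n, value=None)
except RuntimeError as e:
    print(e)  # Unable to create link (name already exists)
```

This is why saving only the weights succeeds: the model's layer weights get distinct group paths, while the flat list of optimizer weight names collides.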

Change not only the weight names but all variables, including the biases:

for v in model.variables:
    v._handle_name = v.name + '_'

I did that, but the error was not resolved. I don't think this error is related to the weights and biases, because everything works when save_weights_only is set to True. My guess is that the error is caused by the optimizer parameter names.