Keras: error when creating an h5 (HDF5) file


Using the code below I have saved the model weights to mnist_weights1234.h5. Now I want to create the same kind of file, with the same layer configuration, as mnist_weights1234.h5 myself.

from __future__ import print_function  # must come before any other statement
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import numpy as np
from sklearn.model_selection import train_test_split

batch_size = 128
num_classes = 3
epochs = 1

# input image dimensions
img_rows, img_cols = 28, 28

# Just to reduce the data set: keep only digits 0, 1 and 2
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x1_train=x_train[y_train==0]; y1_train=y_train[y_train==0]
x1_test=x_test[y_test==0];y1_test=y_test[y_test==0]
x2_train=x_train[y_train==1];y2_train=y_train[y_train==1]
x2_test=x_test[y_test==1];y2_test=y_test[y_test==1]
x3_train=x_train[y_train==2];y3_train=y_train[y_train==2]
x3_test=x_test[y_test==2];y3_test=y_test[y_test==2]

X=np.concatenate((x1_train,x2_train,x3_train,x1_test,x2_test,x3_test),axis=0)
Y=np.concatenate((y1_train,y2_train,y3_train,y1_test,y2_test,y3_test),axis=0)

# the data, shuffled and split between train and test sets
x_train, x_test, y_train, y_test = train_test_split(X,Y)

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(1, kernel_size=(2, 2),
                 activation='relu',
                 input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(16,16)))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.save_weights('mnist_weights1234.h5')
Now I want to create a file like mnist_weights.h5 myself. So I use the code below and get an error:

import h5py

# 'weights' is assumed to hold the layer arrays (bias/kernel order as indexed below)
hf = h5py.File('mnist_weights12356.h5', 'w')
hf.create_dataset('conv2d_2/conv2d_2/bias', data=weights[0])
hf.create_dataset('conv2d_2/conv2d_2/kernel', data=weights[1])
hf.create_dataset('dense_2/dense_2/bias', data=weights[2])
hf.create_dataset('dense_2/dense_2/kernel', data=weights[3])
hf.create_dataset('flatten_2', data=None)
hf.create_dataset('max_pooling_2d_2', data=None)
hf.close()
But I get the following error: TypeError: One of data, shape or dtype must be specified.
How can I solve this problem?

The error message contains your solution. In these lines:

hf.create_dataset('flatten_2', data=None)
hf.create_dataset('max_pooling_2d_2', data=None)
you are passing data equal to None. To create a dataset, the HDF5 library needs a minimum of information: as the error says, you must supply either a dtype (the data type of the dataset's elements), a non-None data argument (from which the shape is inferred), or a shape argument. You supplied none of these, so the error is correct.


Just provide enough information in the create_dataset call for the dataset to be created.
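For example, a dataset can be created either by passing a real array (shape and dtype are then inferred from it) or by giving shape and dtype explicitly; layers that have no weights, such as Flatten and MaxPooling2D, do not need a dataset at all. A minimal sketch with a placeholder array, using the names from the question rather than the exact layout Keras writes:

import h5py
import numpy as np

kernel = np.zeros((2, 2, 1, 1), dtype='float32')  # placeholder array

with h5py.File('example.h5', 'w') as hf:
    # option 1: pass real data; shape and dtype are inferred
    hf.create_dataset('conv2d_2/conv2d_2/kernel', data=kernel)
    # option 2: give shape and dtype explicitly; the dataset is zero-filled
    hf.create_dataset('dense_2/dense_2/bias', shape=(3,), dtype='float32')
    # layers without weights need no dataset; an empty group is enough
    hf.create_group('flatten_2')
    hf.create_group('max_pooling_2d_2')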

If you want to use weights coming from numpy arrays, just set them directly on the layers:

model.get_layer('conv2d_2').set_weights([weights[1],weights[0]])
model.get_layer('dense_2').set_weights([weights[3],weights[2]])
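The swapped indices above reflect that Keras layers expect their weight list in [kernel, bias] order, while the arrays in the question are indexed bias-first. A quick way to confirm the expected order and shapes before assigning is to print them per layer (a minimal sketch, assuming the layer names conv2d_2 and dense_2 from the snippets above):

for name in ('conv2d_2', 'dense_2'):
    layer = model.get_layer(name)
    # each entry is a numpy array; for Conv2D/Dense the order is [kernel, bias]
    print(name, [w.shape for w in layer.get_weights()])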
If the arrays are stored in a file:

array = numpy.load('arrayfile.npy')
You can also save the entire model's weights as numpy arrays and restore them later:

numpy.save('weights.npy', model.get_weights())
model.set_weights(numpy.load('weights.npy'))
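One caveat with the last two lines: model.get_weights() returns a list of arrays with different shapes, so numpy stores it as an object array, and newer numpy versions refuse to load such a pickled array unless allow_pickle=True is passed. A minimal sketch of the round trip under that assumption:

import numpy as np

# explicit object dtype, since the per-layer arrays have different shapes
np.save('weights.npy', np.array(model.get_weights(), dtype=object))

# object arrays are pickled, so loading them needs allow_pickle=True
loaded = np.load('weights.npy', allow_pickle=True)
model.set_weights(list(loaded))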

Why do you want to create this file by hand when you can just call model.save_weights()?

I want to import weights stored in numpy arrays from outside. I know which lines cause the error; in fact Flatten and MaxPooling have no weights at all, so what information could I provide for them?

@Hitesh take a look at what Keras already does and inspect an h5dump of that hdf5 file. For me, the dataset shape shows up as (0, 0).

Yes, I thought the same. I'll try it and let you know.
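Following the h5dump suggestion above, the file written by model.save_weights() can also be inspected with h5py; note that besides the per-layer datasets Keras stores attributes such as layer_names and weight_names, which a handcrafted file would also need for load_weights() to work. A minimal inspection sketch:

import h5py

with h5py.File('mnist_weights1234.h5', 'r') as hf:
    # Keras keeps the ordered layer names in a root attribute
    print('layer_names:', list(hf.attrs['layer_names']))

    def show(name, obj):
        # datasets have a shape, groups do not
        shape = getattr(obj, 'shape', None)
        print(name, shape, dict(obj.attrs))

    hf.visititems(show)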