Python: How can I add extra images to the CIFAR-10 dataset for training in Keras?


I have a CNN in Keras that is trained on CIFAR-10. Now, to improve the training, I need to add some other images during training, for example 32x32 patches from the Kodak database. But I don't know how to do this: we just import CIFAR-10 and Keras already knows it, so I don't know how to add my image patches to CIFAR-10 during training. Can you help me? Thank you very much.

import numpy as np
import keras as Kr
from keras.datasets import cifar10
from keras.models import Model
from keras.layers import Input, Conv2D, BatchNormalization, GaussianNoise
from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard

wtm = Input((32, 32, 1))
image = Input((32, 32, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
#conv3 = Conv2D(8, (3, 3), activation='relu', padding='same', name='convl3e', kernel_initializer='Orthogonal',bias_initializer='glorot_uniform')(conv2)
BN=BatchNormalization()(conv3)
encoded =  Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I')(BN)

#--------------------------------------------------------------
# embed the watermark: element-wise addition of wtm to the encoded image
add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
encoded_merged = add_const([encoded, wtm])

#-----------------------decoder------------------------------------------------
#------------------------------------------------------------------------------
deconv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1d')(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl3d')(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='relu',padding='same', name='convl4d')(deconv3)
BNd = BatchNormalization()(deconv4)

decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd) 

model=Model(inputs=[image,wtm],outputs=decoded)

decoded_noise = GaussianNoise(0.5)(decoded)

#----------------------w extraction------------------------------------
convw1 = Conv2D(64, (3,3), activation='relu', padding='same', name='conl1w')(decoded_noise)
convw2 = Conv2D(64, (3, 3), activation='relu', padding='same', name='convl2w')(convw1)
convw3 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl3w')(convw2)
convw4 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl4w')(convw3)
convw5 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl5w')(convw4)
convw6 = Conv2D(64, (3, 3), activation='relu', padding='same', name='conl6w')(convw5)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(convw6)  
# decoded_noise is an intermediate tensor, not an Input layer, so it cannot
# serve as a model input on its own; the combined model below is what is trained
w_extraction = Model(inputs=[image, wtm], outputs=[decoded, pred_w])

#----------------------training the model--------------------------------------
#------------------------------------------------------------------------------
#----------------------Data preparesion-------------------------------------

def grayscale(data, dtype='float32'):
    # luma coding: weighted average of the RGB channels, as used in video systems
    r, g, b = np.asarray(.3, dtype=dtype), np.asarray(.59, dtype=dtype), np.asarray(.11, dtype=dtype)
    rst = r * data[:, :, :, 0] + g * data[:, :, :, 1] + b * data[:, :, :, 2]
    # add channel dimension
    rst = np.expand_dims(rst, axis=3)
    return rst

(x_train, _), (x_test, _) = cifar10.load_data()
x_train = grayscale(x_train)
x_test = grayscale(x_test)
x_validation = x_train[:10000]
x_train = x_train[10000:]

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_validation = x_validation.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 32, 32, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 32, 32, 1))
x_validation = np.reshape(x_validation, (len(x_validation), 32, 32, 1))

#---------------------compile and train the model------------------------------
w_extraction.compile(optimizer='adam',
                     loss={'decoder_output': 'mse', 'reconstructed_W': 'binary_crossentropy'},
                     loss_weights={'decoder_output': 0.45, 'reconstructed_W': 1.0},
                     metrics=['mae'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20)
mc = ModelCheckpoint('best_model_5x5F_dil_Los751.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
# w_expand / wv_expand are the watermark arrays of shape (N, 32, 32, 1), prepared elsewhere
history = w_extraction.fit([x_train, w_expand], [x_train, w_expand],
                           epochs=4000,
                           batch_size=32,
                           validation_data=([x_validation, wv_expand], [x_validation, wv_expand]),
                           callbacks=[TensorBoard(log_dir='E:/concatnatenetwork', histogram_freq=0, write_graph=False), es, mc])
w_extraction.summary()
WEIGHTS_FNAME = 'v1_adam_model_5x5F_add_dil_Los751.hdf'
w_extraction.save_weights(WEIGHTS_FNAME, overwrite=True)

You only need to modify the training set. Assuming you have loaded your images into x_new, which should have shape (num_samples, 32, 32, 3), you can then append them:

x_train = np.concatenate([x_train, x_new])

If the new images have a different shape, you will need to rescale them to the appropriate size first.
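
For example, here is a minimal sketch of that rescaling step, assuming the extra images sit in an array or list called new_images (that name and the use of scikit-image's resize are my own choices for illustration, not part of the question):

import numpy as np
from skimage.transform import resize  # any resizing routine would do

def rescale_to_cifar_shape(images):
    """Resize a batch of RGB images of arbitrary size to (32, 32, 3)."""
    resized = np.stack([resize(img, (32, 32, 3), anti_aliasing=True) for img in images])
    # resize() returns floats in [0, 1]; convert back to the 0-255 uint8 range
    # so the result matches what cifar10.load_data() returns before the /255. step
    return (resized * 255).astype('uint8')

x_new = rescale_to_cifar_shape(new_images)  # new_images: hypothetical source array
x_train = np.concatenate([x_train, x_new])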

Please add the code you are using; the problem is then just concatenating the new images onto the training set. — I have added the code. As you can see, in training I use cifar10.load_data, but now I need to add some extra images besides CIFAR-10 during training. I don't need labels for the images, because as in an autoencoder we reconstruct the input image, so no labels are needed. Now, please tell me how to add the other images during training?
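
As a concrete sketch for this autoencoder-style setup (no labels needed): the snippet below assumes the Kodak images sit as PNG files in a local folder kodak/ (the path, the non-overlapping patch extraction, and the use of Pillow are assumptions for illustration). It cuts each image into 32x32 patches, runs them through the same grayscale() helper used above, and concatenates them onto x_train before calling fit:

import glob
import numpy as np
from PIL import Image  # Pillow, assumed available

def load_patches(pattern='kodak/*.png', patch=32):
    """Cut every image matching `pattern` into non-overlapping patch x patch RGB blocks."""
    patches = []
    for path in glob.glob(pattern):
        img = np.asarray(Image.open(path).convert('RGB'))
        h, w = img.shape[:2]
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                patches.append(img[i:i + patch, j:j + patch])
    return np.stack(patches)  # shape: (num_patches, 32, 32, 3)

# same preprocessing as the CIFAR-10 images above
x_extra = grayscale(load_patches()).astype('float32') / 255.
x_train = np.concatenate([x_train, x_extra])
# note: w_expand must then be extended to match the new length of x_train

Because the model reconstructs its own input, the new patches need no labels; the only constraint is that every array passed to fit has a matching number of samples.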