Python: changing the decoder output shape in a convolutional autoencoder for training


I am trying to train a convolutional autoencoder model on my dataset. I have extracted 500 patches of size 400×400×1. My model looks like this:

from keras.layers import Input, Conv2D, BatchNormalization, MaxPooling2D, UpSampling2D

input_img = Input(shape=(400, 400, 1))

def encoder(input_img):
    # encoder
    conv1 = Conv2D(4, (3, 3), activation='relu', padding='same')(input_img)
    conv1 = BatchNormalization()(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    conv2 = Conv2D(8, (3, 3), activation='relu', padding='same')(pool1)
    conv2 = BatchNormalization()(conv2)
    pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
    conv3 = Conv2D(16, (3, 3), activation='relu', padding='same')(pool2)
    conv3 = BatchNormalization()(conv3)
    pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
    conv4 = Conv2D(32, (3, 3), activation='relu', padding='same')(pool3)
    conv4 = BatchNormalization()(conv4)
    pool4 = MaxPooling2D(pool_size=(2, 2))(conv4)
    conv5 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool4)
    conv5 = BatchNormalization()(conv5)
    pool5 = MaxPooling2D(pool_size=(2, 2))(conv5)
    conv6 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool5)
    conv6 = BatchNormalization()(conv6)
    pool6 = MaxPooling2D(pool_size=(2, 2))(conv6)
    conv7 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool6)
    conv7 = BatchNormalization()(conv7)
    pool7 = MaxPooling2D(pool_size=(2, 2))(conv7)
    conv8 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool7)
    conv8 = BatchNormalization()(conv8)
    return conv8


def decoder(conv8):
    # decoder
    conv9 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv8)
    conv9 = BatchNormalization()(conv9)
    up1 = UpSampling2D((2, 2))(conv9)
    conv12 = Conv2D(256, (3, 3), activation='relu', padding='same')(up1)
    conv12 = BatchNormalization()(conv12)
    up2 = UpSampling2D((2, 2))(conv12)
    conv13 = Conv2D(128, (3, 3), activation='relu', padding='same')(up2)
    conv13 = BatchNormalization()(conv13)
    up3 = UpSampling2D((2, 2))(conv13)
    conv14 = Conv2D(64, (3, 3), activation='relu', padding='same')(up3)
    conv14 = BatchNormalization()(conv14)
    up4 = UpSampling2D((2, 2))(conv14)
    conv15 = Conv2D(32, (3, 3), activation='relu', padding='same')(up4)
    conv15 = BatchNormalization()(conv15)
    up5 = UpSampling2D((2, 2))(conv15)
    conv16 = Conv2D(16, (3, 3), activation='relu', padding='same')(up5)
    conv16 = BatchNormalization()(conv16)
    up6 = UpSampling2D((2, 2))(conv16)
    conv17 = Conv2D(8, (3, 3), activation='relu', padding='same')(up6)
    conv17 = BatchNormalization()(conv17)
    up7 = UpSampling2D((2, 2))(conv17)
    conv18 = Conv2D(4, (3, 3), activation='relu', padding='same')(up7)
    conv18 = BatchNormalization()(conv18)
    decoded = Conv2D(1, (3, 3), activation='relu', padding='same')(conv18)
    return decoded

After the encoder, the shape becomes (3, 3, 512). But after the decoder runs, the shape only comes back up to (384, 384, 1). Since my input shape is (400, 400, 1), I cannot train the model. Is there any upsampling method that can change my decoder's 384×384×1 output layer to 400×400×1?
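The mismatch can be traced with simple arithmetic: each `MaxPooling2D` floors the spatial size when it is odd, so the lost pixels never come back on the way up. A quick sketch (plain Python, no Keras needed):

```python
# Trace the spatial size through the seven pool/upsample stages.
size = 400
for _ in range(7):      # seven MaxPooling2D(pool_size=(2, 2)) layers
    size = size // 2    # 400 -> 200 -> 100 -> 50 -> 25 -> 12 -> 6 -> 3
print(size)             # 3  (the bottleneck is 3x3x512)

for _ in range(7):      # seven UpSampling2D((2, 2)) layers
    size = size * 2     # 3 -> 6 -> 12 -> 24 -> 48 -> 96 -> 192 -> 384
print(size)             # 384, not 400: the odd-size floors (25 -> 12) are lossy
```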

You can increase the padding of the intermediate Conv2D layers to make up the missing 16 pixels. See the documentation for how 2-D padding is controlled.


You can also use explicit padding.
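In Keras, the explicit route would be a `ZeroPadding2D` layer near the end of the decoder. The per-side amount follows from the arithmetic: (400 − 384) / 2 = 8. A hedged sketch (the layer call in the comment assumes the `conv18`/`decoded` names from the question's code):

```python
# How many zero pixels each side needs so 384 grows back to 400.
target, actual = 400, 384
total = target - actual     # 16 missing pixels per spatial dimension
per_side = total // 2       # 8 top/bottom, 8 left/right
print(per_side)             # 8

# In the decoder this would become (sketch, assuming the question's layers):
#   from keras.layers import ZeroPadding2D
#   padded = ZeroPadding2D(padding=(per_side, per_side))(conv18)
#   decoded = Conv2D(1, (3, 3), activation='relu', padding='same')(padded)
```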

First of all, thank you for your attention. Can I change the padding like that in Keras? Also, when I change padding to a tuple or an int, I get the error 'tuple' object has no attribute 'lower'.

Ah, yes, that was the PyTorch documentation. Add a keras tag to the question; if the problem is Keras-specific, that helps you get the right answer. But there must surely be an equivalent way to pad in Keras/TensorFlow.
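The error in the comment is consistent with how Keras validates the `padding` argument: `Conv2D` accepts only the strings `'valid'` or `'same'` and normalises them with `.lower()`, so a PyTorch-style tuple fails before any padding happens. A minimal reproduction of that failure mode, without Keras (the `normalize_padding` helper here is a stand-in, not the actual Keras internal):

```python
# Conv2D in Keras expects padding='valid' or padding='same' and lower-cases
# the value; a tuple (as PyTorch's Conv2d would accept) has no .lower().
def normalize_padding(padding):
    return padding.lower()         # stand-in for Keras's string handling

print(normalize_padding('SAME'))   # 'same' -- strings are fine

try:
    normalize_padding((1, 1))      # PyTorch-style pixel tuple
except AttributeError as e:
    print(e)                       # 'tuple' object has no attribute 'lower'
```

Pixel-count padding in Keras therefore goes through a separate `ZeroPadding2D` layer rather than the `Conv2D` argument.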