Python 3.x deep convolutional autoencoder problem - encoding dimension far too large

I recently built a convolutional autoencoder, on top of which I have built a slew of other networks. I only now realize that I made a fundamental mistake (one I should have spotted long ago). I thought my encoding layer (i.e. the output of the max-pooling layer named 'encoder', see below) was encoding_dim-dimensional. It turns out to be much bigger than that: I wanted 144, but I got 144 x 12 x 12 = 20,736 values, which is actually larger than the input itself (48x48x3 = 6,912 values).

Here is the code for the autoencoder; the architecture appears in the first block below, followed by the model summary (for image_dim = 48, i.e. images of size 48x48x3, and encoding_dim = 144).

This mistake also puts me in a bind with my other networks, so I will need to adjust the architecture and retrain everything.

Can anyone tell me where I went wrong and, more importantly, how to adjust the filters/kernels so that my encoding layer really is encoding_dim-dimensional?

# Imports and values were not shown in the original post; they are assumed
# here from the model summary (image_dim = 48, encoding_dim = 144).
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, BatchNormalization

image_dim = 48      # input images are 48x48x3
encoding_dim = 144  # intended size of the encoding

input_shape = (image_dim, image_dim, 3)

# Build model
autoencoder = Sequential()
# Encoder: each Conv2D outputs encoding_dim channels; each MaxPooling2D((2, 2))
# halves the spatial dimensions (48 -> 24 -> 12).
autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu', input_shape=input_shape,
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(MaxPooling2D((2, 2), padding='same'))

autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(MaxPooling2D((2, 2), padding='same', name='encoder'))  # bottleneck: (12, 12, 144)

# Decoder: mirror the encoder with UpSampling2D (12 -> 24 -> 48).
autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(UpSampling2D((2, 2)))

autoencoder.add(Conv2D(encoding_dim, (3, 3), padding='same', activation='relu',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
autoencoder.add(UpSampling2D((2, 2)))

# Output: back to 3 channels, sigmoid for pixel values in [0, 1].
autoencoder.add(Conv2D(3, (10, 10), padding='same', activation='sigmoid',
                       kernel_initializer='random_uniform', bias_initializer='zeros'))
autoencoder.add(BatchNormalization())
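
For reference, a quick back-of-the-envelope check of the bottleneck shape (a minimal sketch, assuming image_dim = 48 and encoding_dim = 144 as above): the first argument to Conv2D is the number of filters, i.e. the number of output channels, not the total encoding size, and each MaxPooling2D((2, 2)) only halves the spatial dimensions.

# Why the 'encoder' layer comes out as (12, 12, 144) rather than (144,):
h = w = 48            # input spatial size
for _ in range(2):    # two MaxPooling2D((2, 2)) layers in the encoder
    h, w = h // 2, w // 2
channels = 144        # Conv2D(encoding_dim, ...) only sets the channel count

print((h, w, channels))   # (12, 12, 144)
print(h * w * channels)   # 20736 values in the bottleneck
print(48 * 48 * 3)        # 6912 values in the input -- the "encoding" is bigger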
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 48, 48, 144)       4032      
_________________________________________________________________
batch_normalization_1 (Batch (None, 48, 48, 144)       576       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 24, 24, 144)       0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 144)       186768    
_________________________________________________________________
batch_normalization_2 (Batch (None, 24, 24, 144)       576       
_________________________________________________________________
encoder (MaxPooling2D)       (None, 12, 12, 144)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 12, 12, 144)       186768    
_________________________________________________________________
batch_normalization_3 (Batch (None, 12, 12, 144)       576       
_________________________________________________________________
up_sampling2d_1 (UpSampling2 (None, 24, 24, 144)       0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 24, 24, 144)       186768    
_________________________________________________________________
batch_normalization_4 (Batch (None, 24, 24, 144)       576       
_________________________________________________________________
up_sampling2d_2 (UpSampling2 (None, 48, 48, 144)       0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 48, 48, 3)         43203     
_________________________________________________________________
batch_normalization_5 (Batch (None, 48, 48, 3)         12        
=================================================================
Total params: 609,855
Trainable params: 608,697
Non-trainable params: 1,158
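
One possible way to get a genuinely encoding_dim-dimensional code (a sketch of my own under the same assumptions, not from the original post; the filter count of 32 is an arbitrary choice) is to flatten the convolutional features and project them through a Dense bottleneck, then expand back with Dense + Reshape before upsampling:

from keras.models import Sequential
from keras.layers import (Conv2D, MaxPooling2D, UpSampling2D,
                          Flatten, Dense, Reshape)

image_dim, encoding_dim = 48, 144
input_shape = (image_dim, image_dim, 3)

autoencoder = Sequential()
# Encoder: two conv/pool stages bring 48x48 down to 12x12.
autoencoder.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                       input_shape=input_shape))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
autoencoder.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
autoencoder.add(MaxPooling2D((2, 2), padding='same'))
# Bottleneck: flatten the 12*12*32 features and project to encoding_dim.
autoencoder.add(Flatten())
autoencoder.add(Dense(encoding_dim, activation='relu', name='encoder'))
# Decoder: expand back to 12x12x32, then upsample to 48x48x3.
autoencoder.add(Dense(12 * 12 * 32, activation='relu'))
autoencoder.add(Reshape((12, 12, 32)))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
autoencoder.add(UpSampling2D((2, 2)))
autoencoder.add(Conv2D(3, (3, 3), padding='same', activation='sigmoid'))

With this layout, autoencoder.get_layer('encoder').output_shape is (None, 144), so downstream networks can consume a flat 144-dimensional code.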