Deep learning: error received when building an autoencoder

Tags: deep-learning, lstm, recurrent-neural-network, autoencoder

I am trying to build an autoencoder for my term project, using a CNN as the encoder and an LSTM as the decoder. Whenever I display the model summary, I receive the following error:

ValueError: Input 0 is incompatible with layer lstm_10: expected ndim=3, found ndim=2

I have already tried changing the shape of the LSTM input, but without success.

from keras.models import Sequential, Model
from keras.layers import (Lambda, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, Flatten, LSTM, Dropout, Dense)

def keras_model(image_x, image_y):

    model = Sequential()
    model.add(Lambda(lambda x: x / 127.5 - 1., input_shape=(image_x, image_y, 1)))

    last = model.output
    x = Conv2D(3, (3, 3), padding='same')(last)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), padding='valid')(x)

    encoded = Flatten()(x)  # 2D output: (batch, features)
    x = LSTM(8, return_sequences=True, input_shape=(100, 100))(encoded)  # raises the ValueError
    decoded = LSTM(64, return_sequences=True)(x)

    x = Dropout(0.5)(decoded)
    x = Dense(400, activation='relu')(x)
    x = Dense(25, activation='relu')(x)
    final = Dense(1, activation='relu')(x)

    autoencoder = Model(model.input, final)

    autoencoder.compile(optimizer="Adam", loss="mse")
    autoencoder.summary()

model = keras_model(100, 100)

If you are using an LSTM, you need a time dimension, so your input shape should be: (time, image_x, image_y, nb_image_channels).
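As a minimal sketch of what that means (the shapes here are illustrative assumptions, not taken from your model): an LSTM consumes 3D tensors of shape (batch, timesteps, features), while Flatten() emits 2D tensors of shape (batch, features), which is exactly the ndim=3 vs. ndim=2 mismatch in your error.

from keras.models import Sequential
from keras.layers import LSTM

seq = Sequential()
# 100 timesteps with 64 features each -> input is (batch, 100, 64), i.e. ndim=3
seq.add(LSTM(8, return_sequences=True, input_shape=(100, 64)))
seq.summary()  # output shape: (None, 100, 8)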

I suggest building a deeper understanding of autoencoders, LSTMs, and 2D convolutions, since all of them are at play here. This is a helpful introduction, and this one as well.

Also have a look at this example, where someone implemented an LSTM with Conv2D. The TimeDistributed layer comes in useful there.
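For illustration, a rough sketch of that idea (the layer sizes and the 10-frame sequence length are assumptions made up for this example, not taken from the linked answer): wrap each 2D layer in TimeDistributed so it is applied to every frame, then feed the resulting sequence to the LSTM.

from keras.models import Sequential
from keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM

model = Sequential()
# input: a sequence of 10 grayscale 100x100 frames -> (timesteps, h, w, channels)
model.add(TimeDistributed(Conv2D(3, (3, 3), padding='same', activation='relu'),
                          input_shape=(10, 100, 100, 1)))
model.add(TimeDistributed(MaxPooling2D((2, 2))))
model.add(TimeDistributed(Flatten()))      # -> (batch, 10, 50*50*3)
model.add(LSTM(8, return_sequences=True))  # ndim=3 input, as the LSTM expects
model.summary()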

However, to fix the error, you can add a Reshape() layer to simulate the extra dimension:

from keras.models import Sequential, Model
from keras.layers import (Lambda, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, Flatten, Reshape, LSTM, Dropout, Dense)

def keras_model(image_x, image_y):

    model = Sequential()
    model.add(Lambda(lambda x: x / 127.5 - 1., input_shape=(image_x, image_y, 1)))

    last = model.output
    x = Conv2D(3, (3, 3), padding='same')(last)
    x = BatchNormalization()(x)
    x = Activation('relu')(x)
    x = MaxPooling2D((2, 2), padding='valid')(x)

    encoded = Flatten()(x)
    # (50, 50, 3) is the output shape of the max-pooling layer (see model summary)
    encoded = Reshape((50 * 50 * 3, 1))(encoded)
    x = LSTM(8, return_sequences=True)(encoded)  # input shape can be removed
    decoded = LSTM(64, return_sequences=True)(x)

    x = Dropout(0.5)(decoded)
    x = Dense(400, activation='relu')(x)
    x = Dense(25, activation='relu')(x)
    final = Dense(1, activation='relu')(x)

    autoencoder = Model(model.input, final)

    autoencoder.compile(optimizer="Adam", loss="mse")
    print(autoencoder.summary())

    return autoencoder

model = keras_model(100, 100)
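To sanity-check the fix, you can push a dummy batch through the returned model (a small sketch; the numpy zeros input is just an assumption for illustration):

import numpy as np

dummy = np.zeros((1, 100, 100, 1))  # one 100x100 single-channel image
print(model.predict(dummy).shape)   # (1, 7500, 1): 50*50*3 timesteps, one value each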


Thanks. Do you suggest using TimeDistributed before the layers? Yes, I recommend using it as a wrapper around the layers.