Python: getting the output of the bottleneck layer from an autoencoder


I'm new to autoencoders. I built a simple convolutional autoencoder as shown below:

import tensorflow as tf
from tensorflow.keras import models
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Dense, Reshape, UpSampling2D)

# ENCODER
input_img = Input(shape=(64, 64, 1))

encode1 = Conv2D(32, (3, 3), activation=tf.nn.leaky_relu, padding='same')(input_img) 
encode2 = MaxPooling2D((2, 2), padding='same')(encode1)
l = Flatten()(encode2)
l = Dense(100, activation='linear')(l)

# DECODER
d = Dense(1024, activation='linear')(l) 
d = Reshape((32,32,1))(d)
decode3 = Conv2D(64, (3, 3), activation=tf.nn.leaky_relu, padding='same')(d) 
decode4 = UpSampling2D((2, 2))(decode3)

model = models.Model(input_img, decode4)

model.compile(optimizer='adam', loss='mse')

# Train it by providing training images
model.fit(x, y, epochs=20, batch_size=16)
Now, after training this model, I want to get the output of the bottleneck layer, i.e. the dense layer. That means if I feed an array of shape (1000, 64, 64) into the model, I want back the compressed array of shape (1000, 100).

I tried one approach, shown below, but it gives me an error:

model = Model(inputs=[x], outputs=[l])
Error:


I also tried some other approaches, but none of them worked either. Can someone tell me how to recover the compressed array after training the model?

You need to create a separate model for the encoder. After training the whole encoder-decoder system, you can then use just the encoder for prediction. Code example:

import tensorflow as tf
from tensorflow.keras import layers, Model

# ENCODER
input_img = layers.Input(shape=(64, 64, 1))
encode1 = layers.Conv2D(32, (3, 3), activation=tf.nn.leaky_relu, padding='same')(input_img) 
encode2 = layers.MaxPooling2D((2, 2), padding='same')(encode1)
l = layers.Flatten()(encode2)
encoder_output = layers.Dense(100, activation='linear')(l)

# DECODER
d = layers.Dense(1024, activation='linear')(encoder_output) 
d = layers.Reshape((32,32,1))(d)
decode3 = layers.Conv2D(64, (3, 3), activation=tf.nn.leaky_relu, padding='same')(d) 
decode4 = layers.UpSampling2D((2, 2))(decode3)

model_encoder = Model(input_img, encoder_output)
model = Model(input_img, decode4)

model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=20, batch_size=16)
model_encoder.predict(X) should then return a 100-dimensional vector for each image.
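If the full autoencoder has already been built and trained without a separate encoder model, the bottleneck output can also be recovered afterwards by wrapping the trained graph in a new `Model` that ends at the bottleneck layer. A minimal sketch, assuming TensorFlow 2.x; the layer name `'bottleneck'` is an illustrative choice (name the `Dense` layer yourself, or find the right layer via `model.summary()`):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Build the autoencoder; naming the bottleneck layer makes it easy to find later.
inp = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(32, (3, 3), activation=tf.nn.leaky_relu, padding='same')(inp)
x = layers.MaxPooling2D((2, 2), padding='same')(x)
x = layers.Flatten()(x)
code = layers.Dense(100, activation='linear', name='bottleneck')(x)

d = layers.Dense(1024, activation='linear')(code)
d = layers.Reshape((32, 32, 1))(d)
d = layers.Conv2D(64, (3, 3), activation=tf.nn.leaky_relu, padding='same')(d)
out = layers.UpSampling2D((2, 2))(d)

autoencoder = Model(inp, out)

# A sub-model sharing the (trained) weights, cut off at the bottleneck.
encoder = Model(autoencoder.input, autoencoder.get_layer('bottleneck').output)

codes = encoder.predict(np.zeros((4, 64, 64, 1)), verbose=0)
print(codes.shape)  # (4, 100)
```

Because the sub-model reuses the same layer objects, it needs no retraining: after `autoencoder.fit(...)`, `encoder.predict(...)` immediately yields the learned compressed representations.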
