Python: how to use Keras merge layers for an autoencoder with two outputs


Suppose I have two inputs, X and Y, and I want to design a joint autoencoder that reconstructs X' and Y'.

As in the figure, X is the audio input and Y is the video input. This deep architecture is nice because it has two inputs and two outputs, and the two branches share some layers in the middle. My question is how to write this autoencoder in Keras. Assume every layer is fully connected, apart from the shared layers in the middle.

Here is my code:

from keras.layers import Input, Dense
from keras.models import Model
import numpy as np

X = np.random.random((1000, 100))
y = np.random.random((1000, 300))  # x and y can be different size

# the X autoencoder branch
Xinput = Input(shape=(100,))

encoded = Dense(50, activation='relu')(Xinput)
encoded = Dense(20, activation='relu')(encoded)
encoded = Dense(15, activation='relu')(encoded)

decoded = Dense(20, activation='relu')(encoded)
decoded = Dense(50, activation='relu')(decoded)
decoded = Dense(100, activation='relu')(decoded)

# the Y autoencoder branch
Yinput = Input(shape=(300,))

encoded = Dense(120, activation='relu')(Yinput)
encoded = Dense(50, activation='relu')(encoded)
encoded = Dense(15, activation='relu')(encoded)

decoded = Dense(50, activation='relu')(encoded)
decoded = Dense(120, activation='relu')(decoded)
decoded = Dense(300, activation='relu')(decoded)

I simply have 15 nodes in the middle shared by X and Y. My question is how to train this joint autoencoder with the loss ||X - X'||^2 + ||Y - Y'||^2.


Thanks.

Just to clarify: you want to build a single model with two input layers, two output layers, and shared layers in between, right?

I think this can give you an idea:

from keras.layers import Input, Dense, Concatenate
from keras.models import Model
import numpy as np

X = np.random.random((1000, 100))
y = np.random.random((1000, 300))  # x and y can be different size

# the X autoencoder layer 
Xinput = Input(shape=(100,))

encoded_x = Dense(50, activation='relu')(Xinput)
encoded_x = Dense(20, activation='relu')(encoded_x)

# the Y autoencoder layer 
Yinput = Input(shape=(300,))

encoded_y = Dense(120, activation='relu')(Yinput)
encoded_y = Dense(50, activation='relu')(encoded_y)

# concatenate encoding layers
c_encoded = Concatenate(name="concat", axis=1)([encoded_x, encoded_y])
encoded = Dense(15, activation='relu')(c_encoded)

decoded_x = Dense(20, activation='relu')(encoded)
decoded_x = Dense(50, activation='relu')(decoded_x)
decoded_x = Dense(100, activation='relu')(decoded_x)

out_x = decoded_x  # optionally stack more output layers here

decoded_y = Dense(50, activation='relu')(encoded)
decoded_y = Dense(120, activation='relu')(decoded_y)
decoded_y = Dense(300, activation='relu')(decoded_y)

out_y = decoded_y  # optionally stack more output layers here

# Now you have two inputs and two outputs with a shared middle layer
model = Model([Xinput, Yinput], [out_x, out_y])
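
For completeness, here is a minimal sketch of how this joint model could be compiled and trained, assuming out_x and out_y are simply the 100- and 300-dimensional reconstruction layers above (so the targets are the inputs themselves); with equal weights the sum of the two MSE terms matches the ||X - X'||^2 + ||Y - Y'||^2 objective from the question:

# one MSE term per output; the total loss is their weighted sum
model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1., 1.])

# for an autoencoder the reconstruction targets are the inputs themselves
model.fit([X, y], [X, y], epochs=10, batch_size=32)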

Your code builds two separate models. While you can reuse the output of the shared representation layer for both of the following sub-networks, you have to merge the two sub-networks on the input side:

from keras.layers import Input, Dense, Concatenate
from keras.models import Model

Xinput = Input(shape=(100,))
Yinput = Input(shape=(300,))

Xencoded = Dense(50, activation='relu')(Xinput)
Xencoded = Dense(20, activation='relu')(Xencoded)


Yencoded = Dense(120, activation='relu')(Yinput)
Yencoded = Dense(50, activation='relu')(Yencoded)

shared_input = Concatenate()([Xencoded, Yencoded])
shared_output = Dense(15, activation='relu')(shared_input)

Xdecoded = Dense(20, activation='relu')(shared_output)
Xdecoded = Dense(50, activation='relu')(Xdecoded)
Xdecoded = Dense(100, activation='relu')(Xdecoded)

Ydecoded = Dense(50, activation='relu')(shared_output)
Ydecoded = Dense(120, activation='relu')(Ydecoded)
Ydecoded = Dense(300, activation='relu')(Ydecoded)

Now you have two separate outputs, so you need two separate loss functions; Keras adds them together (according to the loss weights) when you compile the model:

model = Model([Xinput, Yinput], [Xdecoded, Ydecoded])
model.compile(optimizer='adam', loss=['mse', 'mse'], loss_weights=[1., 1.])

With loss_weights=[1., 1.] this is exactly the joint objective ||X - X'||^2 + ||Y - Y'||^2 you asked about. You can then simply train the model with:

model.fit([X, y], [X, y])  # for an autoencoder the reconstruction targets are the inputs themselves
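
After training, both reconstructions come back from a single call; a minimal usage sketch, assuming the model and the random X and y arrays defined above:

X_rec, Y_rec = model.predict([X, y])
print(X_rec.shape, Y_rec.shape)  # (1000, 100) and (1000, 300)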