Machine learning: How to pass an intermediate layer of one model to another model for skip connections in PyTorch

I want to define an encoder-decoder architecture as two separate models and then connect them using nn.Sequential, as shown in the code below. Now, suppose I want to concatenate the output of the encoder's conv4 block onto the input of the decoder's deconv1 block as a skip connection. Is there a way to achieve this without merging the encoder and decoder into a single model? I want to keep them separate so that the same encoder's output can be used as input to multiple decoders.

import torch
import torch.nn as nn

# conv, deconv and ResidualBlock are helper functions/modules defined
# elsewhere in the original post
class Encoder(nn.Module):

    def __init__(self, conv_dim=64, n_res_blocks=2):
        super(Encoder, self).__init__()

        # Define the encoder
        self.conv1 = conv(3, conv_dim, 4)
        self.conv2 = conv(conv_dim, conv_dim*2, 4)
        self.conv3 = conv(conv_dim*2, conv_dim*4, 4)
        self.conv4 = conv(conv_dim*4, conv_dim*4, 4)

        # Define the resnet part of the encoder
        # Residual blocks
        res_layers = []
        for layer in range(n_res_blocks):
            res_layers.append(ResidualBlock(conv_dim*4))
        # use sequential to create these layers
        self.res_blocks = nn.Sequential(*res_layers)

        # leaky relu function
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        # define feedforward behavior, applying activations as necessary
        conv1 = self.leaky_relu(self.conv1(x))
        conv2 = self.leaky_relu(self.conv2(conv1))
        conv3 = self.leaky_relu(self.conv3(conv2))
        conv4 = self.leaky_relu(self.conv4(conv3))

        out = self.res_blocks(conv4)

        return out

# Define the Decoder Architecture
class Decoder(nn.Module):

    def __init__(self, conv_dim=64, n_res_blocks=2):
        super(Decoder, self).__init__()

        # Define the resnet part of the decoder
        # Residual blocks
        res_layers = []
        for layer in range(n_res_blocks):
            res_layers.append(ResidualBlock(conv_dim*4))
        # use sequential to create these layers
        self.res_blocks = nn.Sequential(*res_layers)

        # Define the decoder 
        self.deconv1 = deconv(conv_dim*4, conv_dim*4, 4)
        self.deconv2 = deconv(conv_dim*4, conv_dim*2, 4)
        self.deconv3 = deconv(conv_dim*2, conv_dim, 4)
        self.deconv4 = deconv(conv_dim, conv_dim, 4)

        # no batch norm on last layer
        self.out_layer = deconv(conv_dim, 3, 1, stride=1, padding=0, normalization=False)

        # leaky relu function
        self.leaky_relu = nn.LeakyReLU(negative_slope=0.2)

    def forward(self, x):
        # define feedforward behavior, applying activations as necessary
        res = self.res_blocks(x)

        deconv1 = self.leaky_relu(self.deconv1(res))
        deconv2 = self.leaky_relu(self.deconv2(deconv1))
        deconv3 = self.leaky_relu(self.deconv3(deconv2))
        deconv4 = self.leaky_relu(self.deconv4(deconv3))

        # tanh applied to last layer
        out = torch.tanh(self.out_layer(deconv4))
        out = torch.clamp(out, min=-0.5, max=0.5)

        return out

def model():
    enc = Encoder(conv_dim=64, n_res_blocks=2)
    dec = Decoder(conv_dim=64, n_res_blocks=2)
    return nn.Sequential(enc, dec)

Instead of returning only the latent features from the encoder's last layer, you can return the outputs of the intermediate layers as a list, together with the latent features. Then, in the decoder's forward function, you can take that list of values returned by the encoder as an argument and use them in the decoder's layers accordingly.
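As a minimal sketch of that idea (reusing the conv, deconv and ResidualBlock helpers from the question, and assuming conv/deconv are the usual stride-2 convolution and transposed-convolution blocks), the encoder can return the conv4 activation alongside the latent features, and the decoder can accept it as an extra argument and concatenate it onto the input of deconv1. The SkipEncoder/SkipDecoder names below are illustrative, not from the original post:

class SkipEncoder(Encoder):

    def forward(self, x):
        conv1 = self.leaky_relu(self.conv1(x))
        conv2 = self.leaky_relu(self.conv2(conv1))
        conv3 = self.leaky_relu(self.conv3(conv2))
        conv4 = self.leaky_relu(self.conv4(conv3))
        out = self.res_blocks(conv4)
        # return the skip activation together with the latent features
        return out, conv4

class SkipDecoder(Decoder):

    def __init__(self, conv_dim=64, n_res_blocks=2):
        super(SkipDecoder, self).__init__(conv_dim, n_res_blocks)
        # after concatenation deconv1 sees twice as many input channels
        self.deconv1 = deconv(conv_dim*8, conv_dim*4, 4)

    def forward(self, x, skip):
        res = self.res_blocks(x)
        # skip connection: concatenate along the channel dimension
        deconv1 = self.leaky_relu(self.deconv1(torch.cat([res, skip], dim=1)))
        deconv2 = self.leaky_relu(self.deconv2(deconv1))
        deconv3 = self.leaky_relu(self.deconv3(deconv2))
        deconv4 = self.leaky_relu(self.deconv4(deconv3))
        out = torch.tanh(self.out_layer(deconv4))
        return torch.clamp(out, min=-0.5, max=0.5)

Because nn.Sequential only forwards a single output from one module to the next, the two models are wired together explicitly here; this also makes it easy to feed the same encoder output to several decoders:

enc = SkipEncoder(conv_dim=64, n_res_blocks=2)
dec_a = SkipDecoder(conv_dim=64, n_res_blocks=2)
dec_b = SkipDecoder(conv_dim=64, n_res_blocks=2)

latent, skip = enc(x)          # x is an (N, 3, H, W) image batch
out_a = dec_a(latent, skip)
out_b = dec_b(latent, skip)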

Hope this helps.