Python: test modes of a denoising autoencoder for multiclass classification

I am training an autoencoder for a multiclass classification problem, in which I send 16 equiprobable messages through a denoising autoencoder and try to recover them at the receiver. I am trying to reproduce the results of this paper (a variation of Fig. 3b); for the model, refer to Fig. 2 in the paper.
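
For reference, a minimal sketch of the data setup the snippets below assume (the dataset size and batch size here are placeholders, not values from my actual code):

import torch
from torch.utils.data import DataLoader, TensorDataset

k = 4                    # bits per message, so 2 ** k = 16 messages
n_channel = 7            # channel uses per message
batch_size = 256         # assumed
num_train = 10 ** 5      # assumed dataset size

# Each sample is a one-hot vector of length 2**k; the label is the message index.
train_labels = torch.randint(0, 2 ** k, (num_train,))
train_data = torch.eye(2 ** k)[train_labels]
trainloader = DataLoader(TensorDataset(train_data, train_labels),
                         batch_size=batch_size, shuffle=True)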

Here is my autoencoder class:

from math import sqrt

import torch
import torch.nn as nn


class FullyConnectedAutoencoder(nn.Module):
    def __init__(self, k, n_channel, EbN0_dB):
        super(FullyConnectedAutoencoder, self).__init__()
        self.k = k
        self.n_channel = n_channel
        self.EbN0_dB = EbN0_dB

        self.transmitter = nn.Sequential(
            nn.Linear(in_features=2 ** k, out_features=2 ** k, bias=True),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=2 ** k, out_features=n_channel, bias=True))
        self.receiver = nn.Sequential(
            nn.Linear(in_features=n_channel, out_features=2 ** k, bias=True),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=2 ** k, out_features=2 ** k, bias=True))

    def forward(self, x):
        x = self.transmitter(x)

        # Normalization: scale each block to squared norm n_channel,
        # i.e. unit average power per channel use
        n = x.norm(dim=-1, keepdim=True)
        x = sqrt(self.n_channel) * (x / n)

        # AWGN channel at the training Eb/N0 (e.g. 3 dB)
        training_SNR = 10 ** (self.EbN0_dB / 10)
        R = self.k / self.n_channel  # rate in bits per channel use
        noise = torch.randn(x.size()) / ((2 * R * training_SNR) ** 0.5)
        x = x + noise                # out-of-place add, safe for autograd

        x = self.receiver(x)
        return x
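
A quick smoke test of the class, assuming k = 4, n_channel = 7 and a training Eb/N0 of 3 dB:

net = FullyConnectedAutoencoder(k=4, n_channel=7, EbN0_dB=3.0)
dummy = torch.eye(16)    # one batch containing each of the 16 messages once
logits = net(dummy)
print(logits.shape)      # torch.Size([16, 16]): one score per possible message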
My training loop is as follows:

# TRAINING
for epoch in range(epochs):
    for step, (x, y) in enumerate(trainloader):  # iterate over mini-batches

        # Forward pass
        output = net(x)
        y = y.long().view(-1)        # class indices for CrossEntropyLoss
        loss = loss_func(output, y)  # cross entropy loss

        # Backward and optimize
        optimizer.zero_grad()  # clear gradients for this training step
        loss.backward()  # backpropagation, compute gradients
        optimizer.step()  # apply gradients

        if step % 100 == 0:
            with torch.no_grad():
                train_output = net(train_data)
            pred_labels = torch.max(train_output, 1)[1].squeeze()
            accuracy = (pred_labels == train_labels).sum().item() / float(train_labels.size(0))
            print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| train accuracy: %.4f' % accuracy)
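
The loop assumes a setup along these lines (cross entropy matches the comment above; the optimizer, learning rate, and epoch count are placeholders):

net = FullyConnectedAutoencoder(k, n_channel, EbN0_dB=3.0)
loss_func = nn.CrossEntropyLoss()                         # "cross entropy loss" per the comment
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)   # optimizer and lr assumed
epochs = 20                                               # assumed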
The training loop runs fine. However, I want to test the model at different SNRs, and that is where I run into trouble. Below are the two approaches I am trying.

Approach 1: declare a new object every time the autoencoder is tested

for p in range(len(EbNo_test)):
    with torch.no_grad():
        for test_data, test_labels in testloader:

            net = FullyConnectedAutoencoder(k, n_channel, EbNo_test[p])
            decoded_signal = net(test_data)

            # encoded_signal = net.transmitter(test_data)
            # noisy_signal = encoded_signal + test_noise
            # decoded_signal = net.receiver(noisy_signal)

            pred_labels = torch.max(decoded_signal, 1)[1].squeeze()
            test_BLER[p] = (pred_labels != test_labels).sum().item() / float(test_labels.size(0))

    print('Eb/N0:', EbNo_test[p].numpy(), '| test BLER: %.4f' % test_BLER[p])
Approach 2: this feels more intuitive. Use the transmitter and receiver parts separately, and add the noise myself after the signal is sent.

for p in range(len(EbNo_test)):
    EcNo_test_sqrt[p] = 1 / (2 * R * (10 ** (EbNo_test[p] / 20)))
    test_noise = EcNo_test_sqrt[p] * torch.randn(batch_size, n_channel)
    with torch.no_grad():
        for test_data, test_labels in testloader:

            encoded_signal = net.transmitter(test_data)
            noisy_signal = encoded_signal + test_noise
            decoded_signal = net.receiver(noisy_signal)

            pred_labels = torch.max(decoded_signal, 1)[1].squeeze()
            test_BLER[p] = (pred_labels != test_labels).sum().item() / float(test_labels.size(0))

    print('Eb/N0:', EbNo_test[p].numpy(), '| test BLER: %.4f' % test_BLER[p])
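
For concreteness, here are the two noise scalings written side by side, assuming k = 4 and n_channel = 7: the training-time formula from forward() and the EcNo_test_sqrt line above.

R = 4 / 7      # k / n_channel with the assumed values
EbN0_dB = 3.0

# Noise standard deviation used during training (inside forward):
train_sigma = 1 / ((2 * R * 10 ** (EbN0_dB / 10)) ** 0.5)

# Noise standard deviation produced by the EcNo_test_sqrt line in approach 2:
test_sigma = 1 / (2 * R * 10 ** (EbN0_dB / 20))

print(train_sigma, test_sigma)  # ~0.662 vs ~0.619 at 3 dB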
The strange thing is that I get wrong results in both cases, meaning block error rates of about 90%, when they should follow the trend of Fig. 3b in the paper cited above.

Am I doing something wrong? Any help is greatly appreciated.