Deep learning autoencoder: incorrect size of the encoded result


Using the code below, I am trying to encode the images in MNIST into a low-dimensional representation:

import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from sklearn import metrics
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.preprocessing import scale
import datetime
from ast import literal_eval
import seaborn as sns
sns.set_style("darkgrid")
import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable

%matplotlib inline

low_dim_rep = 32
epochs = 2

cuda = torch.cuda.is_available() # True if cuda is available, False otherwise
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
print('Training on %s' % ('GPU' if cuda else 'CPU'))

# Loading the MNIST data set
transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor(),
                torchvision.transforms.Normalize((0.1307,), (0.3081,))])
mnist = torchvision.datasets.MNIST(root='../data/', train=True, transform=transform, download=True)

# Loader to feed the data batch by batch during training.
batch = 100
data_loader = torch.utils.data.DataLoader(mnist, batch_size=batch, shuffle=True)


encoder = nn.Sequential(
                # Encoder
                nn.Linear(28 * 28, 64),
                nn.PReLU(64),
                nn.BatchNorm1d(64),

                # Low-dimensional representation
                nn.Linear(64, low_dim_rep),
                nn.PReLU(low_dim_rep),
                nn.BatchNorm1d(low_dim_rep))

decoder = nn.Sequential(
                # Decoder
                nn.Linear(low_dim_rep, 64),
                nn.PReLU(64),
                nn.BatchNorm1d(64),
                nn.Linear(64, 28 * 28))

autoencoder = nn.Sequential(encoder, decoder)

encoder = encoder.type(FloatTensor)
decoder = decoder.type(FloatTensor)
autoencoder = autoencoder.type(FloatTensor)

optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.00001)


data_size = int(mnist.train_labels.size()[0])

print('data_size' , data_size)
for i in range(epochs):
    for j, (images, _) in enumerate(data_loader):
        images = images.view(images.size(0), -1) # from (batch, 1, 28, 28) to (batch, 28 * 28)
        images = Variable(images).type(FloatTensor)

        autoencoder.zero_grad()
        reconstructions = autoencoder(images)
        loss = torch.dist(images, reconstructions)
        loss.backward()
        optimizer.step()
    print('Epoch %i/%i loss %.2f' % (i + 1, epochs, loss.data[0]))

print('Optimization finished.')

# Get the encoded images here
encoded_images = []
for j, (images, _) in enumerate(data_loader):
    images = images.view(images.size(0), -1) 
    images = Variable(images).type(FloatTensor)

    encoded_images.append(encoder(images))
After running this code,

len(encoded_images)

is 600. I expected the length to match the number of images in MNIST:

len(mnist)

which is 60,000.

How can I encode the images into a low-dimensional representation of size 32 (low_dim_rep = 32)? Is the network I defined incorrect?

You have 60,000 images in mnist and your batch = 100. That is why len(encoded_images) = 600: you perform 60000 / 100 = 600 iterations when generating the encoded images, and you end up with a list of 600 elements, each of shape [100, 32]. You can do the following instead:

# Pre-allocate one row per image and fill it batch by batch
encoded_images = torch.zeros(len(mnist), 32)
for j, (images, _) in enumerate(data_loader):
    images = images.view(images.size(0), -1)
    images = Variable(images).type(FloatTensor)
    encoded_images[j * batch : (j+1) * batch] = encoder(images)
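
A side note on the snippet above: the encoder contains BatchNorm1d layers, so when it is used only to extract encodings it is usually switched to eval mode, and gradient tracking can be turned off. A minimal sketch, assuming a recent PyTorch version (no Variable needed) and reusing the names from the question:

# Assumes a recent PyTorch; eval() makes BatchNorm1d use its running
# statistics, and torch.no_grad() skips gradient bookkeeping.
encoder.eval()
encoded_images = torch.zeros(len(mnist), low_dim_rep)
with torch.no_grad():
    for j, (images, _) in enumerate(data_loader):
        images = images.view(images.size(0), -1).type(FloatTensor)
        # .cpu() is a no-op on CPU and moves the batch back if the encoder ran on the GPU
        encoded_images[j * batch : (j + 1) * batch] = encoder(images).cpu()
encoder.train()  # restore training mode if training continues afterwards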

Thanks, but using the code above returns an error:

RuntimeError Traceback (most recent call last)
in ()
      3     images = images.view(images.size(0), -1)
      4     images = Variable(images).type(FloatTensor)
----> 5     encoded_images[j * batch : (j+1) * batch] = encoder(images)
RuntimeError: The expanded size of the tensor (32) must match the existing size (4) at non-singleton dimension 1

Yes, it works now. Thanks for sharing. Following your explanation, this also works:

l = []
for i in encoded_images:
    for ii in i:
        l.append(ii)
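
The nested-loop version in the last comment yields a plain Python list of 60,000 vectors of length 32. If a single tensor of shape [60000, 32] is preferred, the list of [100, 32] batch outputs from the original loop can also be concatenated in one call, for example:

# encoded_images here is the list of 600 batch outputs, each of shape [100, 32]
all_encodings = torch.cat(encoded_images, dim=0)
print(all_encodings.shape)  # torch.Size([60000, 32])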