
Multi-class classification - RuntimeError: 1D target tensor expected, multi-target not supported (Python 3.x, PyTorch)


My goal is to build a multi-class image classifier with PyTorch, based on the EMNIST dataset (black-and-white pictures of letters).

My training data X_train has the shape (124800, 28, 28).

The shape of the original target variable y_train was (124800, 1), but I created a one-hot encoding, so its shape is now (124800, 26).

The model I am building should have 26 output variables, each representing the probability of one letter.

I read in my data as follows:

import scipy.io

emnist = scipy.io.loadmat(DATA_DIR + '/emnist-letters.mat')
data = emnist['dataset']
X_train = data['train'][0, 0]['images'][0, 0]
X_train = X_train.reshape((-1, 28, 28), order='F')

y_train = data['train'][0, 0]['labels'][0, 0]
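As a quick check (not part of the original post), printing the resulting array shapes confirms the dimensions described above:

print(X_train.shape)  # (124800, 28, 28)
print(y_train.shape)  # (124800, 1)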
Then, I created a one-hot encoding as follows:

import numpy as np

y_train_one_hot = np.zeros([len(y_train), 27])

for i in range(0, len(y_train)):
    y_train_one_hot[i, y_train[i][0]] = 1

y_train_one_hot = np.delete(y_train_one_hot, 0, 1)
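For reference, an equivalent way to build the same encoding without the explicit loop (a minimal sketch, assuming the EMNIST letter labels run from 1 to 26):

y_train_one_hot = np.eye(26)[y_train.flatten() - 1]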
I create the dataset with:

train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train_one_hot))

batch_size = 128
n_iters = 3000
num_epochs = n_iters / (len(train_dataset) / batch_size)
num_epochs = int(num_epochs)

train_loader = torch.utils.data.DataLoader(dataset=train_dataset, 
                                           batch_size=batch_size, 
                                           shuffle=True)
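As a quick sanity check (not part of the original post), inspecting a single batch shows what the loss function will eventually receive; the label shape is the important part here:

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([128, 28, 28]), before unsqueeze(1) in the training loop
print(labels.shape)  # torch.Size([128, 26]) -> 2D one-hot targets, one column per letter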
Then I build my model as follows:

import torch.nn as nn

class CNNModel(nn.Module):

    def __init__(self):
        super(CNNModel, self).__init__()

        # Convolution 1
        self.cnn1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=5, stride=1, padding=0)
        self.relu1 = nn.ReLU()

        # Max pool 1
        self.maxpool1 = nn.MaxPool2d(2, 2)

        # Convolution 2
        self.cnn2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5, stride=1, padding=0)
        self.relu2 = nn.ReLU()

        # Max pool 2
        self.maxpool2 = nn.MaxPool2d(kernel_size=2)

        # Fully connected 1 (readout)
        self.fc1 = nn.Linear(32 * 4 * 4, 26)

    def forward(self, x):
        # Convolution 1
        out = self.cnn1(x.float())
        out = self.relu1(out)

        # Max pool 1
        out = self.maxpool1(out)

        # Convolution 2
        out = self.cnn2(out)
        out = self.relu2(out)

        # Max pool 2
        out = self.maxpool2(out)

        # Flatten: (batch_size, 32, 4, 4) -> (batch_size, 32*4*4)
        out = out.view(out.size(0), -1)

        # Linear function (readout)
        out = self.fc1(out)

        return out
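To see where the 32 * 4 * 4 input size of fc1 comes from, a throwaway forward pass through the convolutional part (a quick check, not from the original post) can be used:

import torch

m = CNNModel()
dummy = torch.zeros(1, 1, 28, 28)
out = m.maxpool2(m.relu2(m.cnn2(m.maxpool1(m.relu1(m.cnn1(dummy))))))
print(out.shape)  # torch.Size([1, 32, 4, 4]) -> 32 * 4 * 4 = 512 features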
And then I train my model as follows:

import torch

model = CNNModel()

criterion = nn.CrossEntropyLoss()

learning_rate = 0.01

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

iter = 0
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):

        # Add a single channel dimension
        # From: [batch_size, height, width]
        # To: [batch_size, 1, height, width]
        images = images.unsqueeze(1)

        # Clear gradients w.r.t. parameters
        optimizer.zero_grad()

        # Forward pass to get output/logits
        outputs = model(images)

        # Calculate Loss: softmax --> cross entropy loss
        loss = criterion(outputs, labels)

        # Getting gradients w.r.t. parameters
        loss.backward()

        # Updating parameters
        optimizer.step()

        iter += 1

        if iter % 500 == 0:
            # Calculate Accuracy
            correct = 0
            total = 0
            # Iterate through test dataset
            for images, labels in test_loader:

                images = images.unsqueeze(1)

                # Forward pass only to get logits/output
                outputs = model(images)

                # Get predictions from the maximum value
                _, predicted = torch.max(outputs.data, 1)

                # Total number of labels
                total += labels.size(0)

                correct += (predicted == labels).sum()

            accuracy = 100 * correct / total

            # Print Loss
            print('Iteration: {}. Loss: {}. Accuracy: {}'.format(iter, loss.item(), accuracy))
However, when I run this, I get the following error:

    ---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-11-c26c43bbc32e> in <module>()
     21 
     22         # Calculate Loss: softmax --> cross entropy loss
---> 23         loss = criterion(outputs, labels)
     24 
     25         # Getting gradients w.r.t. parameters

3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    930     def forward(self, input, target):
    931         return F.cross_entropy(input, target, weight=self.weight,
--> 932                                ignore_index=self.ignore_index, reduction=self.reduction)
    933 
    934 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2315     if size_average is not None or reduce is not None:
   2316         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2317     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2318 
   2319 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   2113                          .format(input.size(0), target.size(0)))
   2114     if dim == 2:
-> 2115         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   2116     elif dim == 4:
   2117         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: 1D target tensor expected, multi-target not supported

I expect that I am doing something wrong in the way I initialize/use my loss function. What can I do so that I can start training my model?

If you are using cross-entropy loss, you should not one-hot encode your target variable y. PyTorch's CrossEntropyLoss expects just the class indices as the target, not their one-hot encoded version.

To cite the documentation:

This criterion expects a class index in the range [0, C-1] as the target for each value of a 1D tensor of size minibatch

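Concretely, one way to apply this to the code above (a minimal sketch under the assumption that the EMNIST letter labels run from 1 to 26, not code from the answer itself) is to build the dataset from class indices instead of the one-hot matrix:

# Shift labels from 1..26 to 0..25 and use them directly as a 1D LongTensor target
y_train_idx = torch.from_numpy(y_train.flatten()).long() - 1

train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_train), y_train_idx)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)

# Inside the training loop, labels now has shape [batch_size], which is what
# nn.CrossEntropyLoss expects:
loss = criterion(outputs, labels)

Alternatively, if the one-hot targets are kept, they can be converted back to indices on the fly with labels = torch.argmax(labels, dim=1) before calling the loss.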