
Python PyTorch: multi-target error with cross-entropy

Tags: python, pytorch, conv-neural-network, google-colaboratory, cross-entropy

So I'm training a neural network. Here are the basic details:

  • original label dim = torch.Size([64, 1])
  • output from the net dim = torch.Size([64, 2])
  • loss type = nn.CrossEntropyLoss()
  • error = RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

Where am I going wrong?
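For reference, the mismatch reproduces in isolation; a minimal sketch with hypothetical random tensors standing in for the real batch:

import torch
import torch.nn as nn

output = torch.randn(64, 2)        # stand-in for the net's output: shape [64, 2]
y = torch.randint(0, 2, (64, 1))   # stand-in for the labels: shape [64, 1]

loss_func = nn.CrossEntropyLoss()
loss = loss_func(output, y)        # raises the RuntimeError above (exact message varies by PyTorch version)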

Training:

import torch
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm

EPOCHS        = 5
LEARNING_RATE = 0.0001
BATCH_SIZE    = 64

net = Net().to(device)
optimizer = optim.Adam(net.parameters(), lr=LEARNING_RATE)

loss_log = []
loss_log, lr_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)
Train function:

def train(net, train_set, loss_log=[], EPOCHS=5, LEARNING_RATE=0.001, BATCH_SIZE=32):
  print('Initiating Training..')
  loss_func = nn.CrossEntropyLoss()
  lr_log = []  # learning-rate history, returned alongside loss_log

  # Iteration begins
  for epoch in tqdm(range(EPOCHS)):
    # Iterate over every batch in the training set
    for data in tqdm(train_set, desc=f'Iteration > {epoch+1}/{EPOCHS} : ', leave=False):
        x, y = data
        net.zero_grad()

        # Compute the output (the net returns logits and their softmax)
        output, sm = net(x)

        # Compute train loss
        loss = loss_func(output, y.to(device))

        # Backpropagate
        loss.backward()

        # Update parameters
        optimizer.step()

        # LEARNING_RATE -= LEARNING_RATE*0.0005

    loss_log.append(loss.item())  # store the scalar, not the graph-attached tensor
    lr_log.append(LEARNING_RATE)

  return loss_log, lr_log
Full error:

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-20-8deb9a27d3b4> in <module>()
     13 
     14 total_epochs += EPOCHS
---> 15 loss_log = train(net, trainSet, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)
     16 
     17 plt.plot(loss_log)

4 frames

<ipython-input-9-59e1d2cf0c84> in train(net, train_set, loss_log, EPOCHS, LEARNING_RATE, BATCH_SIZE)
     21         # Compute Train Loss
     22         # print(output, y.to(device))
---> 23         loss = loss_func(output, y.to(device))
     24 
     25         # Backpropagate

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
    917 
    918 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2019     if size_average is not None or reduce is not None:
   2020         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2022 
   2023 

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15

You've written down the problem yourself:

original label dim = torch.Size([64, 1]) <-- [0] or [1]
output from the net dim = torch.Size([64, 2]) <-- [0,1] or [1,0]

The problem is that your target tensor is two-dimensional ([64, 1] instead of [64]), which makes PyTorch think that each data point has more than one ground-truth label. This is easily solved via loss_func(output, y.flatten().to(device)). Hope this helps!
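A minimal sketch of the suggested fix, using hypothetical random tensors with the shapes from the question:

import torch
import torch.nn as nn

loss_func = nn.CrossEntropyLoss()
output = torch.randn(64, 2)             # logits, shape [64, 2]
y = torch.randint(0, 2, (64, 1))        # class indices, shape [64, 1]

# loss_func(output, y)                  # fails: target is 2-D (exact message varies by version)
loss = loss_func(output, y.flatten())   # target reshaped to [64] -- works
print(loss.item())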


Could be a problem with how you define y. If x is a size [B, C] tensor, then y should be a size [B] LongTensor.

y is already a LongTensor. This is how I define y when creating a positive label: y = torch.Tensor([1]).long() {Note: I create the batches after this, hence the shapes mentioned above.}

This particular error occurs if the sizes of x and y don't agree, so look at x.size() and y.size() before calling the loss function.

I don't think I need to do all of that, since I'm using CrossEntropyLoss, which handles it on its own. I'll check and verify again anyway.

I believe y.squeeze(1) will also do the trick.
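For completeness, a small sketch (hypothetical tensors, not the asker's data) confirming that squeeze(1) and flatten() agree here, and showing the size/dtype check suggested above:

import torch

y = torch.randint(0, 2, (64, 1))               # stand-in labels: [64, 1] LongTensor
print(y.size(), y.dtype)                       # torch.Size([64, 1]) torch.int64

# With a trailing singleton dimension, squeeze(1) and flatten() give the same result:
assert torch.equal(y.squeeze(1), y.flatten())  # both have shape [64]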