PyTorch: RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)


I want to fine-tune a pretrained InceptionV3 on a 100-class dataset, but my training code raises a strange error, shown below.

My training code is:

step = -1
print_inter=50
val_inter=400
train_size = ceil(len(data_set['train']) / dataloader['train'].batch_size)
for epoch in range(50):
    # train phase
    exp_lr_scheduler.step(epoch)
    inception_v3.train(True)
    for batch_cnt, data in enumerate(dataloader['train']):
        step += 1
        inception_v3.train(True)

        inputs, labels = data

        inputs = torch.autograd.Variable(inputs.cuda())
        labels = torch.autograd.Variable(torch.from_numpy(np.array(labels)).long().cuda())

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward (the original code called inception_v3(inputs) twice;
        # one call is enough)
        outputs = inception_v3(inputs)
        print(inputs.shape)
        print(outputs[0].shape)
        print(outputs[1].shape)
        print(labels.shape)

        loss = criterion(outputs[0], labels)
        loss += criterion(outputs[1], labels)
        outputs = (outputs[0] + outputs[1]) / 2

        _, preds = torch.max(outputs, 1)
        loss.backward()
        optimizer.step()
        # batch loss

        inception_v3.train(False)  # Set model to evaluate mode

        for batch_cnt_val, data_val in enumerate(dataloader['val']):
            # print data
            inputs, labels = data_val

            inputs = Variable(inputs.cuda())
            labels = Variable(torch.from_numpy(np.array(labels)).long().cuda())

            # forward
            outputs = inception_v3(inputs)
            print(inputs.shape)
            print(outputs[0].shape)
            print(outputs[1].shape)
            print(labels.shape)
            loss = criterion(outputs[0], labels)
            loss += criterion(outputs[1], labels)
            outputs = (outputs[0] + outputs[1]) / 2
After running this code, I get the following output:

torch.Size([8, 3, 299, 299])
torch.Size([8, 100])
torch.Size([8, 100])
torch.Size([8])
torch.Size([8, 3, 299, 299])
torch.Size([100])
torch.Size([100])
torch.Size([8])
        ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-21-0cfbe57aae73> in <module>()
         49             print(outputs[1].shape)
         50             print(labels.shape)
    ---> 51             loss = criterion(outputs[0], labels)
         52             loss += criterion(outputs[1], labels)
         53             outputs = (outputs[0] + outputs[1]) / 2

    ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
        489             result = self._slow_forward(*input, **kwargs)
        490         else:
    --> 491             result = self.forward(*input, **kwargs)
        492         for hook in self._forward_hooks.values():
        493             hook_result = hook(self, input, result)

    ~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
        757         _assert_no_grad(target)
        758         return F.cross_entropy(input, target, self.weight, self.size_average,
    --> 759                                self.ignore_index, self.reduce)
        760 
        761 

    ~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce)
       1440         >>> loss.backward()
       1441     """
    -> 1442     return nll_loss(log_softmax(input, 1), target, weight, size_average, ignore_index, reduce)
       1443 
       1444 

    ~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel)
        942     if dim is None:
        943         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
    --> 944     return torch._C._nn.log_softmax(input, dim)
        945 
        946 

    RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
As you can see, the same model receives inputs of the same size, yet in the validation loop the output is missing the batch dimension.
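A likely cause (an assumption based on torchvision's `Inception3` behavior, not something stated in the question): with `aux_logits=True`, the model returns a tuple `(logits, aux_logits)` only in training mode; after `inception_v3.train(False)` it returns a single logits tensor. Indexing that tensor with `outputs[0]` then selects the first *sample* of the batch (shape `[100]`), a 1-D tensor on which `log_softmax(input, 1)` fails with exactly this dimension-out-of-range error. The indexing difference can be sketched without the model itself:

```python
import numpy as np

# Training mode: a tuple of two batched outputs (main logits, aux logits).
train_outputs = (np.zeros((8, 100)), np.zeros((8, 100)))

# Eval mode: a single batched logits tensor.
eval_outputs = np.zeros((8, 100))

# Tuple indexing returns a whole batched tensor...
print(train_outputs[0].shape)  # (8, 100)

# ...but tensor indexing returns one row, dropping the batch dimension.
print(eval_outputs[0].shape)   # (100,)
```

A hedged fix would be to branch on the model's mode in the validation loop, e.g. `outputs = inception_v3(inputs)` followed by `loss = criterion(outputs, labels)` with no `[0]`/`[1]` indexing while the model is in eval mode.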