
SSD's loss is not decreasing in PyTorch


I am implementing SSD (Single Shot Detector) in PyTorch to learn about it. However, my custom training loss is not decreasing... I have searched for and tried various solutions for a week, but the problem persists.

What should I do? Is my loss function incorrect?

Here is my SSD300 model:

SSD300(
  (feature_layers): ModuleDict(
    (conv1_1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu1_1): ReLU()
    (conv1_2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu1_2): ReLU()
    (pool1): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (conv2_1): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu2_1): ReLU()
    (conv2_2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu2_2): ReLU()
    (pool2): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (conv3_1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu3_1): ReLU()
    (conv3_2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu3_2): ReLU()
    (conv3_3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu3_3): ReLU()
    (pool3): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=True)
    (conv4_1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu4_1): ReLU()
    (conv4_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu4_2): ReLU()
    (conv4_3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu4_3): ReLU()
    (pool4): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=1, ceil_mode=False)
    (conv5_1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu5_1): ReLU()
    (conv5_2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu5_2): ReLU()
    (conv5_3): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (relu5_3): ReLU()
    (pool5): MaxPool2d(kernel_size=(3, 3), stride=(1, 1), padding=1, dilation=1, ceil_mode=False)
    (conv6): Conv2d(512, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6))
    (relu6): ReLU()
    (conv7): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1))
    (relu7): ReLU()
    (conv8_1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
    (relu8_1): ReLU()
    (conv8_2): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (relu8_2): ReLU()
    (conv9_1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1))
    (relu9_1): ReLU()
    (conv9_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (relu9_2): ReLU()
    (conv10_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
    (relu10_1): ReLU()
    (conv10_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
    (relu10_2): ReLU()
    (conv11_1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
    (relu11_1): ReLU()
    (conv11_2): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1))
    (relu11_2): ReLU()
  )
  (localization_layers): ModuleDict(
    (loc1): Sequential(
      (l2norm_loc1): L2Normalization()
      (conv_loc1): Conv2d(512, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_loc1): ReLU()
    )
    (loc2): Sequential(
      (conv_loc2): Conv2d(1024, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_loc2): ReLU()
    )
    (loc3): Sequential(
      (conv_loc3): Conv2d(512, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_loc3): ReLU()
    )
    (loc4): Sequential(
      (conv_loc4): Conv2d(256, 24, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_loc4): ReLU()
    )
    (loc5): Sequential(
      (conv_loc5): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_loc5): ReLU()
    )
    (loc6): Sequential(
      (conv_loc6): Conv2d(256, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_loc6): ReLU()
    )
  )
  (confidence_layers): ModuleDict(
    (conf1): Sequential(
      (l2norm_conf1): L2Normalization()
      (conv_conf1): Conv2d(512, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_conf1): ReLU()
    )
    (conf2): Sequential(
      (conv_conf2): Conv2d(1024, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_conf2): ReLU()
    )
    (conf3): Sequential(
      (conv_conf3): Conv2d(512, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_conf3): ReLU()
    )
    (conf4): Sequential(
      (conv_conf4): Conv2d(256, 126, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_conf4): ReLU()
    )
    (conf5): Sequential(
      (conv_conf5): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_conf5): ReLU()
    )
    (conf6): Sequential(
      (conv_conf6): Conv2d(256, 84, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (relu_conf6): ReLU()
    )
  )
  (predictor): Predictor()
)
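
For reference, the head channels are consistent with SSD300's standard anchor layout: assuming 21 classes (20 Pascal VOC classes plus background) and 4/6/6/6/4/4 default boxes per cell across the six feature maps, each localization head outputs boxes-per-cell × 4 channels and each confidence head outputs boxes-per-cell × 21. A quick sanity check under those assumptions:

num_classes = 21                      # 20 VOC classes + background (assumed)
boxes_per_cell = [4, 6, 6, 6, 4, 4]   # default boxes per cell on the six feature maps

loc_channels = [16, 24, 24, 24, 16, 16]      # out_channels of conv_loc1..conv_loc6 above
conf_channels = [84, 126, 126, 126, 84, 84]  # out_channels of conv_conf1..conv_conf6 above

for b, loc_c, conf_c in zip(boxes_per_cell, loc_channels, conf_channels):
    assert loc_c == b * 4             # 4 offsets (cx, cy, w, h) per default box
    assert conf_c == b * num_classes  # one score per class per default box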
My loss function is defined as:

import torch
from torch import nn


class SSDLoss(nn.Module):
    def __init__(self, alpha=1, matching_func=None, loc_loss=None, conf_loss=None):
        super().__init__()

        self.alpha = alpha
        self.matching_strategy = matching_strategy if matching_func is None else matching_func
        self.loc_loss = LocalizationLoss() if loc_loss is None else loc_loss
        self.conf_loss = ConfidenceLoss() if conf_loss is None else conf_loss

    def forward(self, predicts, gts, dboxes):
        """
        :param predicts: Tensor, shape is (batch, total_dbox_nums, 4 + class_nums) = (cx, cy, w, h, p_class,...)
        :param gts: Tensor, shape is (batch*bbox_nums(batch), 1 + 4 + class_nums) = [[img's ind, cx, cy, w, h, p_class,...],...]
        :param dboxes: Tensor, shape is (total_dbox_nums, 4 = (cx, cy, w, h))
        :return:
            loss: float
        """
        # get the localization and confidence from predicts
        pred_loc, pred_conf = predicts[:, :, :4], predicts[:, :, 4:]

        # matching
        pos_indicator, gt_loc, gt_conf = self.matching_strategy(gts, dboxes, batch_num=predicts.shape[0], threshold=0.5)

        # calculate the ground truth values with respect to the default boxes
        gt_loc = gt_loc_converter(gt_loc, dboxes)

        # localization loss
        loc_loss = self.loc_loss(pos_indicator, pred_loc, gt_loc)

        # confidence loss
        conf_loss = self.conf_loss(pos_indicator, pred_conf, gt_conf)

        return conf_loss + self.alpha * loc_loss


class LocalizationLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.smoothL1Loss = nn.SmoothL1Loss(reduction='none')

    def forward(self, pos_indicator, predicts, gts):
        N = pos_indicator.sum()
        total_loss = self.smoothL1Loss(predicts, gts).sum(dim=-1)  # shape = (batch num, dboxes num)
        loss = total_loss.masked_select(pos_indicator)
        return loss.sum() / N


class ConfidenceLoss(nn.Module):
    def __init__(self, neg_factor=3):
        """
        :param neg_factor: int, the ratio of pos and neg for hard negative mining (1(pos) : neg_factor)
        """
        super().__init__()
        self.logsoftmax = nn.LogSoftmax(dim=-1)
        self.neg_factor = neg_factor

    def forward(self, pos_indicator, predicts, gts):
        loss = (-gts * self.logsoftmax(predicts)).sum(dim=-1)  # shape = (batch num, dboxes num)

        N = pos_indicator.sum()
        neg_indicator = torch.logical_not(pos_indicator)

        pos_loss = loss.masked_select(pos_indicator)
        neg_loss = loss.masked_select(neg_indicator)
        neg_num = neg_loss.shape[0]
        neg_num = min(neg_num, self.neg_factor * N)

        _, topk_indices = torch.topk(neg_loss, neg_num)
        neg_loss = neg_loss.index_select(dim=0, index=topk_indices)

        return (pos_loss.sum() + neg_loss.sum()) / N
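
A minimal sketch of how this loss is driven from a training loop (placeholder names: it assumes the matching_strategy and gt_loc_converter helpers referenced above are in scope, that model is the SSD300 above returning the concatenated predictions, and that train_loader and dboxes exist with the shapes from the docstrings):

import torch.optim as optim

criterion = SSDLoss(alpha=1)
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # optimizer settings are illustrative

for images, gts in train_loader:
    predicts = model(images)                 # (batch, total_dbox_nums, 4 + class_nums)
    loss = criterion(predicts, gts, dboxes)  # conf_loss + alpha * loc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()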
The loss output is below; it drops briefly and then plateaus at about 12.27 from roughly iteration 80 onward:

Training... Epoch: 1, Iter: 1,   [32/21503   (0%)]  Loss: 28.804445
Training... Epoch: 1, Iter: 10,  [320/21503  (1%)]  Loss: 12.880742
Training... Epoch: 1, Iter: 20,  [640/21503  (3%)]  Loss: 15.932519
Training... Epoch: 1, Iter: 30,  [960/21503  (4%)]  Loss: 14.624641
Training... Epoch: 1, Iter: 40,  [1280/21503     (6%)]  Loss: 16.301014
Training... Epoch: 1, Iter: 50,  [1600/21503     (7%)]  Loss: 15.710087
Training... Epoch: 1, Iter: 60,  [1920/21503     (9%)]  Loss: 12.441727
Training... Epoch: 1, Iter: 70,  [2240/21503     (10%)] Loss: 12.283393
Training... Epoch: 1, Iter: 80,  [2560/21503     (12%)] Loss: 12.272835
Training... Epoch: 1, Iter: 90,  [2880/21503     (13%)] Loss: 12.273635
Training... Epoch: 1, Iter: 100,     [3200/21503     (15%)] Loss: 12.273409
Training... Epoch: 1, Iter: 110,     [3520/21503     (16%)] Loss: 12.266172
Training... Epoch: 1, Iter: 120,     [3840/21503     (18%)] Loss: 12.272820
Training... Epoch: 1, Iter: 130,     [4160/21503     (19%)] Loss: 12.274920
Training... Epoch: 1, Iter: 140,     [4480/21503     (21%)] Loss: 12.275247
Training... Epoch: 1, Iter: 150,     [4800/21503     (22%)] Loss: 12.273258
Training... Epoch: 1, Iter: 160,     [5120/21503     (24%)] Loss: 12.277486
Training... Epoch: 1, Iter: 170,     [5440/21503     (25%)] Loss: 12.266512
Training... Epoch: 1, Iter: 180,     [5760/21503     (27%)] Loss: 12.265674
Training... Epoch: 1, Iter: 190,     [6080/21503     (28%)] Loss: 12.265306
Training... Epoch: 1, Iter: 200,     [6400/21503     (30%)] Loss: 12.269717
Training... Epoch: 1, Iter: 210,     [6720/21503     (31%)] Loss: 12.274122
Training... Epoch: 1, Iter: 220,     [7040/21503     (33%)] Loss: 12.263970
Training... Epoch: 1, Iter: 230,     [7360/21503     (34%)] Loss: 12.267252

Before computing the loss function, I had to normalize the predicted boxes.

The inconsistent value ranges were misleading the training...

class Encoder(nn.Module):
    def __init__(self, norm_means=(0.0, 0.0, 0.0, 0.0), norm_stds=(0.1, 0.1, 0.2, 0.2)):
        super().__init__()
        # shape = (1, 1, 4=(cx, cy, w, h)) or (1, 1, 1)
        self.norm_means = torch.tensor(norm_means, requires_grad=False).unsqueeze(0).unsqueeze(0)
        self.norm_stds = torch.tensor(norm_stds, requires_grad=False).unsqueeze(0).unsqueeze(0)

    def forward(self, gt_boxes, default_boxes):
        """
        :param gt_boxes: Tensor, shape = (batch, default boxes num, 4)
        :param default_boxes: Tensor, shape = (default boxes num, 4)
        Note that 4 means (cx, cy, w, h)
        :return:
            encoded_boxes: Tensor, the ground truth values calculated with respect to the default boxes. The formula is;
                           gt_cx = (gt_cx - dbox_cx)/dbox_w, gt_cy = (gt_cy - dbox_cy)/dbox_h,
                           gt_w = log(gt_w / dbox_w), gt_h = log(gt_h / dbox_h)
                           shape = (batch, default boxes num, 4)
        """
        assert gt_boxes.shape[1:] == default_boxes.shape, "gt_boxes and default_boxes must have compatible shapes"

        gt_cx = (gt_boxes[:, :, 0] - default_boxes[:, 0]) / default_boxes[:, 2]
        gt_cy = (gt_boxes[:, :, 1] - default_boxes[:, 1]) / default_boxes[:, 3]
        gt_w = torch.log(gt_boxes[:, :, 2] / default_boxes[:, 2])
        gt_h = torch.log(gt_boxes[:, :, 3] / default_boxes[:, 3])

        encoded_boxes = torch.cat((gt_cx.unsqueeze(2),
                                   gt_cy.unsqueeze(2),
                                   gt_w.unsqueeze(2),
                                   gt_h.unsqueeze(2)), dim=2)

        # normalize
        return (encoded_boxes - self.norm_means.to(gt_boxes.device)) / self.norm_stds.to(gt_boxes.device)
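
A small smoke test of the encoder (a sketch; the random box values are illustrative, and 8732 is SSD300's total default box count):

import torch

encoder = Encoder()

# random matched gt boxes and default boxes as (cx, cy, w, h); the clamp keeps
# w and h positive so that torch.log() inside the encoder stays finite
gt_loc = torch.rand(2, 8732, 4).clamp(min=0.1)
dboxes = torch.rand(8732, 4).clamp(min=0.1)

encoded = encoder(gt_loc, dboxes)
print(encoded.shape)  # torch.Size([2, 8732, 4]), normalized offsets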
