
Python: PyTorch network does not load correctly


I am trying to create a network in a PyTorch environment using 3x64x64 images, and it seems that I trained my network successfully and saved it. The network looks like this:

class LC_small(nn.Module):
    def __init__(self, c_in, c_out=256):
        super(LC_small, self).__init__()
        self.conv1 = conv(c_in, 64, k=3, stride=1, pad=1)
        self.conv2 = conv(64, 128, k=3, stride=2, pad=1)
        self.conv3 = conv(128, 128, k=3, stride=1, pad=1)
        self.conv4 = conv(128, 128, k=3, stride=2, pad=1)
        self.conv5 = conv(128, 128, k=3, stride=1, pad=1)
        self.conv6 = conv(128, 256, k=3, stride=2, pad=1)
        self.conv7 = conv(256, 256, k=3, stride=1, pad=1)  # feature map: h/8 x w/8 x 256
        self.flat = dense(int(w_rsz / 8) * int(h_rsz / 8) * 256, 256)
        self.dense1 = dense(256, 128, False)
        self.dense2 = dense(128, 3, False)

    def forward(self, input):
        out = self.conv1(input)
        out = self.conv2(out)
        out = self.conv3(out)
        out = self.conv4(out)
        out = self.conv5(out)
        out = self.conv6(out)
        out = self.conv7(out)
        out = out.view(out.size(0), -1)
        out = self.flat(out)
        out = self.dense1(out)
        out = self.dense2(out)
        # print(out.shape)
        normal = torch.nn.functional.normalize(out, 2, 1)
        return normal
I saved my model while training:

for epoch in range(10):
#   continue    # assuming training is already done
    total_loss = 0
    route_param = open(route_diffuse + '/netparam.txt', 'w')
    for param in lcnet.state_dict():
        route_param.write(str(param) + '\t' + str(lcnet.state_dict()[param].size()) + '\n')
    for i, data in enumerate(load_LC, 0):
        input, gtval = data[0].to(dev), data[1].to(dev)
        opt.zero_grad()
        output = lcnet(input)
        loss = crit(output, gtval)
        loss.backward()
        opt.step()
        total_loss += loss.item()
        if i % 10 == 9:
            print(epoch, i, total_loss / 10)
            torch.save(lcnet, route_save)
            total_loss = 0
However, when I try to load the network I created, I see an error message like this:

Traceback (most recent call last):
  File "E:/DLPrj/venv/torch_practice.py", line 324, in <module>
    ipl,npl = getseqi_np(sq_t,lcnet)   #  data : 8 x 6 x w x h
  File "E:/DLPrj/venv/torch_practice.py", line 133, in getseqi_np
    l1 = net_lc(torch.from_numpy(i1r))
  File "E:\DLPrj\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "E:/DLPrj/venv/torch_practice.py", line 216, in forward
    out = self.conv1(input)
  File "E:\DLPrj\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\DLPrj\venv\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
    input = module(input)
  File "E:\DLPrj\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\DLPrj\venv\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "E:\DLPrj\venv\lib\site-packages\torch\nn\modules\conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 64 3 3 3, but got 3-dimensional input of size [64, 64, 3] instead

I can't understand why the input size of the network suddenly changed, or why my network was saved incorrectly. Please take a look at my problem; thank you very much.

Your first error message occurs because torch.from_numpy(i1r) has the wrong shape. You need to do

np.expand_dims(i1r.transpose(2,0,1), axis=0)

and then it will be handled correctly. This is because the network expects a batch dimension, which you did not provide, and it expects the channels in the first dimension rather than the last.
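The reshaping above can be sketched like this (using a dummy 64 x 64 x 3 array in place of your i1r, matching the shape reported in the error message):

```python
import numpy as np
import torch

# Dummy H x W x C image standing in for i1r from the question.
i1r = np.random.rand(64, 64, 3).astype(np.float32)

# Transpose HWC -> CHW, then add a batch dimension: (1, 3, 64, 64).
batched = np.expand_dims(i1r.transpose(2, 0, 1), axis=0)
x = torch.from_numpy(batched)
print(x.shape)  # torch.Size([1, 3, 64, 64])
```

A 4-dimensional (N, C, H, W) tensor like this is what nn.Conv2d expects, which resolves the "Expected 4-dimensional input" error.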


As for the second error message, it is probably because you defined conv incorrectly, so an error occurs when the model is saved.
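A related note on the saving code: torch.save(lcnet, route_save) pickles the whole module, which is fragile across code changes. A minimal sketch of the commonly recommended alternative, saving only the state_dict and rebuilding the architecture before loading (the tiny model here is illustrative, not the question's LC_small):

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the question's model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                      nn.Flatten(),
                      nn.Linear(8 * 4 * 4, 2))

# Save only the parameters, not the pickled module object.
torch.save(model.state_dict(), "net.pt")

# To load: rebuild the same architecture, then restore the weights.
restored = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                         nn.Flatten(),
                         nn.Linear(8 * 4 * 4, 2))
restored.load_state_dict(torch.load("net.pt"))
restored.eval()

x = torch.randn(1, 3, 4, 4)
```

With this pattern, loading no longer depends on unpickling your class definition from its original module path.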


You didn't show the code for how you save the model, so we can't really help you debug this. — Oh, sorry. I have added my saving code in the middle.