torch.nn.LSTM RuntimeError

I am trying to implement the architecture described in "Livelinet: A Multimodal Deep Recurrent Neural Network to Predict Liveliness in Educational Videos".

To explain it briefly: I split a 10-second audio clip into ten 1-second clips and compute a spectrogram (an image) for each 1-second clip. I then use a CNN to get a representation vector from each image, ending up with 10 vectors, one per 1-second clip.

Next, I feed these 10 vectors into an LSTM, and that is where I get the error. My code and the error traceback are below:

import torch
import torch.nn as nn
from torch.autograd import Variable

class AudioCNN(nn.Module):

    def __init__(self):
        super(AudioCNN, self).__init__()
        # alexnet and classifier are defined earlier in my session (not shown)
        self.features = alexnet.features
        self.features2 = nn.Sequential(*classifier)
        self.lstm = nn.LSTM(512, 256, 2)
        self.classifier = nn.Linear(2*256, 2)

    def forward(self, x):
        x = self.features(x)
        print x.size()
        x = x.view(x.size(0), 256*6*6)
        x = self.features2(x)
        x = x.view(10, 1, 512)
        h_0, c_0 = self.init_hidden()
        _, (_, _) = self.lstm(x, (h_0, c_0)) # x dim : 2 x 1 x 256
        assert False
        x = x.view(1, 1, 2*256)
        x = self.classifier(x)

        return x

    def init_hidden(self):
        h_0 = torch.randn(2, 1, 256) # layer * batch * input_dim
        c_0 = torch.randn(2, 1, 256)
        return h_0, c_0

audiocnn = AudioCNN()
input = torch.randn(10, 3, 223, 223)
input = Variable(input)
audiocnn(input)
Error:

RuntimeErrorTraceback (most recent call last)
<ipython-input-64-2913316dbb34> in <module>()
----> 1 audiocnn(input)

/home//local/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

<ipython-input-60-31881982cca9> in forward(self, x)
     15         x = x.view(10,1,512)
     16         h_0,c_0 = self.init_hidden()
---> 17         _, (_, _) = self.lstm(x,(h_0,c_0)) # x dim : 2 x 1 x 256
     18         assert False
     19         x = x.view(1,1,2*256)

/home/local/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

/home//local/lib/python2.7/site-packages/torch/nn/modules/rnn.pyc in forward(self, input, hx)
    160             flat_weight=flat_weight
    161         )
--> 162         output, hidden = func(input, self.all_weights, hx)
    163         if is_packed:
    164             output = PackedSequence(output, batch_sizes)

/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, *fargs, **fkwargs)
    349         else:
    350             func = AutogradRNN(*args, **kwargs)
--> 351         return func(input, *fargs, **fkwargs)
    352 
    353     return forward

/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, weight, hidden)
    242             input = input.transpose(0, 1)
    243 
--> 244         nexth, output = func(input, hidden, weight)
    245 
    246         if batch_first and batch_sizes is None:

/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, hidden, weight)
     82                 l = i * num_directions + j
     83 
---> 84                 hy, output = inner(input, hidden[l], weight[l])
     85                 next_hidden.append(hy)
     86                 all_output.append(output)

/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in forward(input, hidden, weight)
    111         steps = range(input.size(0) - 1, -1, -1) if reverse else range(input.size(0))
    112         for i in steps:
--> 113             hidden = inner(input[i], hidden, *weight)
    114             # hack to handle LSTM
    115             output.append(hidden[0] if isinstance(hidden, tuple) else hidden)

/home//local/lib/python2.7/site-packages/torch/nn/_functions/rnn.pyc in LSTMCell(input, hidden, w_ih, w_hh, b_ih, b_hh)
     29 
     30     hx, cx = hidden
---> 31     gates = F.linear(input, w_ih, b_ih) + F.linear(hx, w_hh, b_hh)
     32 
     33     ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)

/home//local/lib/python2.7/site-packages/torch/nn/functional.pyc in linear(input, weight, bias)
    551     if input.dim() == 2 and bias is not None:
    552         # fused op is marginally faster
--> 553         return torch.addmm(bias, input, weight.t())
    554 
    555     output = input.matmul(weight.t())

/home//local/lib/python2.7/site-packages/torch/autograd/variable.pyc in addmm(cls, *args)
    922         @classmethod
    923         def addmm(cls, *args):
--> 924             return cls._blas(Addmm, args, False)
    925 
    926         @classmethod

/home//local/lib/python2.7/site-packages/torch/autograd/variable.pyc in _blas(cls, args, inplace)
    918             else:
    919                 tensors = args
--> 920             return cls.apply(*(tensors + (alpha, beta, inplace)))
    921 
    922         @classmethod

RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition
The error message

RuntimeError: save_for_backward can only save input or output tensors, but argument 0 doesn't satisfy this condition

usually means that you are passing a tensor, or something else that cannot record history, as an input to a module. In your case, the problem is that init_hidden() returns plain tensors instead of Variable instances. As a result, when the LSTM runs it cannot compute gradients for the hidden state, because its initial input is not part of the backprop graph.
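You can reproduce the problem in isolation with just the LSTM call (a minimal sketch for the 0.x-era PyTorch shown in your traceback, using the same shapes as your model):

import torch
import torch.nn as nn
from torch.autograd import Variable

lstm = nn.LSTM(512, 256, 2)
x = Variable(torch.randn(10, 1, 512))   # seq_len x batch x input_size
h_0 = torch.randn(2, 1, 256)            # plain tensors: no history tracking
c_0 = torch.randn(2, 1, 256)
# lstm(x, (h_0, c_0))                   # raises the save_for_backward RuntimeError
output, (h_n, c_n) = lstm(x, (Variable(h_0), Variable(c_0)))  # wrapping fixes it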

Solution:

def init_hidden(self):
    h_0 = torch.randn(2,1,256) #layer * batch * input_dim
    c_0 = torch.randn(2,1,256)
    return Variable(h_0), Variable(c_0)
Also, an initial LSTM hidden state drawn with mean 0 and variance 1 is probably not helpful. Ideally you would make the initial state trainable as well, e.g.:

h_0 = torch.zeros(2,1,256) # layer * batch * input_dim
c_0 = torch.zeros(2,1,256)
h_0_param = torch.nn.Parameter(h_0)
c_0_param = torch.nn.Parameter(c_0)

def init_hidden(self):
    return h_0_param, c_0_param

This way the network can learn which initial state works best. Note that in this case there is no need to wrap h_0_param in a Variable, since a Parameter is essentially a Variable with requires_grad=True.
That's exactly the answer I was looking for! Thanks!