Python: How to fix an error when using DataParallel with a GRU network

Tags: python, deep-learning, pytorch, recurrent-neural-network

I am trying to apply DataParallel to a GRU network as explained in the documentation, but I keep getting the same error.

"""Defines the neural network, losss function and metrics"""

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self, params, anchor_is_phrase):
        """
        Simple LSTM, used to generate the LSTM for both the word and video
        embeddings. 
        Args:
            params: (Params) contains vocab_size, embedding_dim, lstm_hidden_dim
            is_phrase: is word lstm or the vid lstm
        """
        super(Net, self).__init__()
        if anchor_is_phrase:
            self.lstm = nn.DataParallel(nn.GRU(params.word_embedding_dim, params.hidden_dim, 1)).cuda()#, batch_first=True)
        else:
            self.lstm = nn.DataParallel(nn.GRU(params.vid_embedding_dim, params.hidden_dim, 1)).cuda() #, batch_first=True)

    def forward(self, s, anchor_is_phrase = False):
        """
        Forward prop. 
        """
        s, _ = self.lstm(s)
        s.data.contiguous()
        return s
The error occurs at the line s, _ = self.lstm(s) in the code above:

        s, _ = self.lstm(s)  # <-- error raised here
        s.data.contiguous()
        return s
I get the following error message:

    s, _ = self.lstm(s)
  File "/home/pavelameen/miniconda3/envs/TD2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pavelameen/miniconda3/envs/TD2/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/pavelameen/miniconda3/envs/TD2/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/pavelameen/miniconda3/envs/TD2/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
    raise output
  File "/home/pavelameen/miniconda3/envs/TD2/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
    output = module(*input, **kwargs)
  File "/home/pavelameen/miniconda3/envs/TD2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pavelameen/miniconda3/envs/TD2/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 193, in forward
    max_batch_size = input.size(0) if self.batch_first else input.size(1)
AttributeError: 'tuple' object has no attribute 'size'

Interestingly, when I print the type of s at line 27 it is a PackedSequence, so why is it converted to a tuple inside the GRU's forward method?

nn.GRU expects a PackedSequence (line 181) or a tensor as input. As the error message says, you are passing a plain tuple object s instead.
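What converts it is DataParallel itself: when it scatters the inputs across GPUs, older PyTorch versions rebuilt tuple-like inputs as plain tuples, and PackedSequence is a namedtuple subclass of tuple, so each GRU replica received a plain tuple rather than a PackedSequence. A minimal sketch of the effect (the tuple(...) call below paraphrases the old scatter behaviour, it is not the actual DataParallel code):

import torch
from torch.nn.utils.rnn import pack_sequence

# PackedSequence is a namedtuple, i.e. a tuple subclass.
packed = pack_sequence([torch.randn(3, 5), torch.randn(2, 5)])
print(type(packed))               # <class 'torch.nn.utils.rnn.PackedSequence'>
print(isinstance(packed, tuple))  # True

# Rebuilding it field by field drops the subclass, which is roughly
# what DataParallel's scatter did in older PyTorch releases:
plain = tuple(packed)
print(type(plain))                # <class 'tuple'> -- what each replica saw

Inside the replica, nn.GRU's forward therefore takes the non-PackedSequence branch and calls input.size(1) on a tuple, which raises the AttributeError above.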

Comment: I printed the type of s right before passing it to nn.GRU and got PackedSequence!!!

Reply: That is strange, because your error comes from a line that only executes when the input is not a PackedSequence; see lines 181 to 191 in the link above. You could also check the matching source for the PyTorch version you are actually using.
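Given that, one workaround is to keep DataParallel outside the recurrent module and feed it a plain padded tensor, packing only inside forward() so each replica builds its own PackedSequence. This is a minimal sketch under those assumptions (the GRUNet name and all dimensions are illustrative, and enforce_sorted=False needs PyTorch >= 1.1); batch_first=True matters because DataParallel splits inputs along dim 0, which must be the batch axis, not time:

import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence


class GRUNet(nn.Module):
    """Hypothetical wrapper: pack/unpack inside forward so DataParallel
    only ever scatters plain tensors."""

    def __init__(self, input_dim, hidden_dim):
        super(GRUNet, self).__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, 1, batch_first=True)

    def forward(self, padded, lengths):
        # Each replica packs its own slice of the batch.
        packed = pack_padded_sequence(padded, lengths.cpu(),
                                      batch_first=True, enforce_sorted=False)
        out, _ = self.gru(packed)
        # total_length pads every replica's chunk back to the full
        # sequence length so DataParallel can gather equal shapes.
        out, _ = pad_packed_sequence(out, batch_first=True,
                                     total_length=padded.size(1))
        return out


net = nn.DataParallel(GRUNet(input_dim=300, hidden_dim=256)).cuda()
padded = torch.randn(8, 20, 300).cuda()  # (batch, time, features)
lengths = torch.randint(1, 21, (8,))     # one true length per sequence
out = net(padded, lengths)               # (8, 20, 256)

The total_length argument of pad_packed_sequence is documented precisely for this pack -> RNN -> unpack pattern inside a DataParallel-wrapped module.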