PyTorch: CUDA vs CPU backend error with torch.cat?

I am trying to build a CNN architecture over word embeddings. When I concatenate two tensors with torch.cat, it throws the following error:

     22         print(z1.size())
---> 23         zcat = torch.cat(lz, dim = 2)
     24         print("zcat",zcat.size())
     25         zcat2=zcat.reshape([batch_size, 1, 100, 3000])

RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 1 in sequence argument at position #1 'tensors'

The architecture is attached for reference:

    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CNN(nn.Module):
        def __init__(self, vocab_size, embedding_dim, pad_idx):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
            self.convs = nn.ModuleList([nn.Conv2d(in_channels=1, out_channels=50, kernel_size=(1, fs)) for fs in (3, 4, 5)])
            self.conv2 = nn.Conv2d(in_channels=50, out_channels=100, kernel_size=(1, 2))
            self.fc1 = nn.Linear(100000, 150)  # Change this
            self.fc2 = nn.Linear(150, 1)
            self.dropout = nn.Dropout(0.5)

        def forward(self, text):
            print("text", text.size())
            embedded = self.embedding(text.T)
            embedded = embedded.permute(0, 2, 1)
            print("embedded", embedded.size())
            x = embedded.size(2)
            y = 3000 - x
            batch_size = embedded.size(0)
            z = np.zeros((batch_size, 100, y))
            z1 = torch.from_numpy(z).float()  # this tensor lives on the CPU
            lz = [embedded, z1]
            print(z1.size())
            zcat = torch.cat(lz, dim=2)       # fails when embedded is on CUDA
            print("zcat", zcat.size())
            zcat2 = zcat.reshape([batch_size, 1, 100, 3000])
            print("zcat2", zcat2.size())
            conved = [F.relu(conv(embedded)) for conv in self.convs]
            pooled = [F.max_pool2d(conv, (1, 2)) for conv in conved]
            for pl in pooled:
                print(pl.size())
            cat = torch.cat(pooled, dim=3)
            print("cat", cat.size())
            conved2 = F.relu(self.conv2(cat))
            print("conved2", conved2.size())
            pooled2 = F.max_pool2d(conved2, (1, 2))
            print(pooled2.size(), "pooled2")
            return 0
    #         return pooled2

Am I doing something wrong? Thanks for any help.

Figured it out. Just create the tensor directly with `torch.zeros` (not `numpy` followed by `torch.from_numpy`), passing the dtype and device of the embedding tensor:

    torch.zeros(batch_size, 100, y, dtype=embedded.dtype, device=embedded.device)
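A minimal sketch of this fix, runnable on CPU (the shapes here are illustrative stand-ins for the question's values): because the padding tensor inherits `embedded`'s dtype and device, `torch.cat` never mixes backends, regardless of whether the model runs on CPU or CUDA.

```python
import torch

batch_size, x = 4, 1800                     # hypothetical: 1800 stands in for embedded.size(2)
embedded = torch.randn(batch_size, 100, x)  # stands in for the real embedding output
y = 3000 - x

# Pad on the same device and dtype as `embedded`:
z1 = torch.zeros(batch_size, 100, y, dtype=embedded.dtype, device=embedded.device)
zcat = torch.cat([embedded, z1], dim=2)
print(zcat.shape)  # torch.Size([4, 100, 3000])
```

The same call works unchanged after `model.to("cuda")`, since `embedded.device` then reports the GPU and the zeros are allocated there directly.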