PyTorch: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same, but my data has been pushed to the GPU

I get the error message:

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
However, I made sure to send both my data and my model to the GPU. Can anyone help?

My code is:

net.cuda()

'''
print('pyTorch style summary: ', net)
print('Keras style summary:\n')
summary(net,(2,128,128))
'''

criterion=nn.MSELoss()
#optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)
learning_rate = 1e-4
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
print('\nLossFun=',str(criterion))

hf=h5py.File(fn,'r')
print(hf['trainingset'])
tr=np.array(hf['trainingset'])
trtg=np.array(hf['targetsTraining'])
hf.close()


tr = np.moveaxis(tr,3,2)
trtg = np.moveaxis(trtg,3,2)

tr = torch.FloatTensor(tr)
trtg = torch.FloatTensor(trtg)

tr.cuda()
trtg.cuda()


batch_size=16
epochs=2
# run the main training loop
for epoch in range(epochs):
    for batch_idx in range(batch_size):#batch_idx, (data, target) in enumerate(train_loader):
        data =  tr[batch_idx:batch_idx+batch_size-1,:,:,:] 
        target =  trtg[batch_idx:batch_idx+batch_size-1,:,:,:] 
        data, target = Variable(data), Variable(target)
        # resize data from (batch_size, 1, 28, 28) to (batch_size, 28*28)
        #data = data.view(-1, 28*28)
        optimizer.zero_grad()
        net_out = net(data)
        loss = criterion(net_out, target)
        loss.backward()
        optimizer.step()
        batch_idx += 1
        if batch_idx % log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(epoch, batch_idx * len(data), len(train_loader.dataset),100. * batch_idx / len(train_loader), loss.data[0]))

I don't understand why tr.cuda() and trtg.cuda() are not enough! How can I force them onto CUDA?

Calling .cuda() on a tensor does not actually change the tensor. It creates a copy of the tensor on the GPU and returns that copy. With that in mind, you probably want

tr = tr.cuda()
trtg = trtg.cuda()

This is unlike net.cuda(), which does operate in place: it modifies the module's registered parameters and buffers, so no reassignment is needed there.
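
A minimal sketch of the difference, assuming a CUDA-capable machine (the tensor t and module m below are hypothetical, purely for illustration):

import torch
import torch.nn as nn

t = torch.zeros(2, 2)
t.cuda()                 # returns a GPU copy, which is discarded here
print(t.device)          # cpu -- the original tensor is unchanged
t = t.cuda()             # rebind the name to the returned GPU copy
print(t.device)          # cuda:0

m = nn.Linear(2, 2)
m.cuda()                 # Module.cuda() moves parameters and buffers in place
print(m.weight.device)   # cuda:0 -- no reassignment needed

On recent PyTorch versions the same move is usually spelled tr = tr.to(device); for tensors this likewise returns a copy rather than modifying the original.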

Oh, thanks, that is actually rather confusing behavior. By the way, maybe you'd like to take a quick look at another question, which might be interesting, since it concerns a strange behavior of PyTorch w.r.t. tensors and the inverse Fourier transform: