Python PyTorch not using the CUDA device


I have the following code:

from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
import scipy.io

folder = 'small/'
# Load the initial states and targets from .mat files and move them to the GPU
mat = scipy.io.loadmat(folder + 'INISTATE.mat')
ini_state = np.float32(mat['ini_state'])
ini_state = torch.from_numpy(ini_state)
ini_state = ini_state.cuda()

mat = scipy.io.loadmat(folder + 'TARGET.mat')
target = np.float32(mat['target'])
target = torch.from_numpy(target)
target = target.cuda()

class MLPNet(nn.Module):
    def __init__(self):
        super(MLPNet, self).__init__()
        self.fc1 = nn.Linear(3, 64)
        self.fc2 = nn.Linear(64, 128)
        self.fc3 = nn.Linear(128, 128)
        self.fc4 = nn.Linear(128, 41)
    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return x

    def name(self):
        return "MLP"

model = MLPNet()
model = model.cuda()

criterion = nn.MSELoss()
criterion = criterion.cuda()
learning_rate = 0.001
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) 

batch_size = 20
iter_size = int(target.size(0)/batch_size)
print(iter_size)

for epoch in range(50):
    for i in range(iter_size):  
        start = i * batch_size
        end = (i + 1) * batch_size  # slice end is exclusive, so no -1 is needed
        samples = ini_state[start:end, :]
        labels = target[start:end, :]

        optimizer.zero_grad()  # zero the gradient buffer
        outputs = model(samples)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i+1) % 500 == 0:
            print("Epoch %s, batch %s, loss %s" % (epoch, i, loss))
    if (epoch+1) % 7 == 0: 
        for g in optimizer.param_groups:
            g['lr'] = g['lr'] * 0.1

But when I train this simple MLP, CPU usage stays around 100% while the GPU sits at only about 10%. What is preventing the GPU from being used?

Actually, your model really is running on the GPU rather than the CPU. The reason GPU utilization is low is that both your model and your batch size are small, so each step requires very little computation. You can try increasing the batch size to around 1000, and GPU utilization should rise. In fact, PyTorch prevents operations that mix CPU and GPU data; for example, you cannot multiply a GPU tensor by a CPU tensor. So it is normally unlikely that part of your network runs on the CPU while the rest runs on the GPU, unless you deliberately design it that way.
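For example, one quick way to confirm where the model and data actually live is to print their devices; raising the batch size then gives the GPU more work per step. A minimal sketch, reusing the model, ini_state, and target variables from the code above:

print(next(model.parameters()).device)  # expected: cuda:0 if the model is on the GPU
print(ini_state.device, target.device)  # expected: cuda:0 cuda:0

batch_size = 1000                       # larger batches raise GPU utilization
iter_size = int(target.size(0) / batch_size)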

By the way, data shuffling is necessary when training neural networks. When you train with mini-batches, you want each mini-batch in each iteration to approximate the whole dataset. Without shuffling, the samples within a mini-batch are likely to be highly correlated, which leads to biased estimates of the parameter updates. The DataLoader provided by PyTorch can handle the shuffling for you, as sketched below.
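A minimal sketch of what that could look like with the tensors built above (this assumes ini_state and target have the same number of rows; with the default num_workers=0 the DataLoader can iterate over CUDA tensors directly):

from torch.utils.data import TensorDataset, DataLoader

# Wrap the existing tensors in a dataset; shuffle=True re-orders the samples every epoch
dataset = TensorDataset(ini_state, target)
loader = DataLoader(dataset, batch_size=20, shuffle=True)

for epoch in range(50):
    for samples, labels in loader:
        optimizer.zero_grad()
        outputs = model(samples)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()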