Python: how do I switch from GPU to CPU only?


Hi, I would like to know how to run my machine learning code on the CPU instead of the GPU.

I have tried setting GPU to False in the settings file, but that did not fix it.

### Global settings
GPU = False                                                                 # running on GPU is highly suggested
CLEAN = False                                                                # set to "True" if you want to clean the temporary large files after generating result
APP = "classification"                                                       # Do not change! mode choide: "classification", "imagecap", "vqa". Currently "imagecap" and "vqa" are not supported.
CATAGORIES = ["object", "part"]                                              # Do not change! concept categories that are chosen to detect: "object", "part", "scene", "material", "texture", "color"
map_location='cpu'

CAM_THRESHOLD = 0.5                                                          # the threshold used for CAM visualization
FONT_PATH = "components/font.ttc"                                            # font file path
FONT_SIZE = 26                                                               # font size
SEG_RESOLUTION = 7                                                           # the resolution of cam map
BASIS_NUM = 7       
Traceback (most recent call last):
  File "test.py", line 22, in <module>
    model = loadmodel()
  File "/home/joshuayun/Desktop/IBD/loader/model_loader.py", line 44, in loadmodel
    checkpoint = torch.load(settings.MODEL_FILE)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 387, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 574, in _load
    result = unpickler.load()
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 537, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 119, in default_restore_location
    result = fn(storage, location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 95, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/joshuayun/.local/lib/python3.6/site-packages/torch/serialization.py", line 79, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
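
The last line of the traceback already points at the fix: pass map_location='cpu' to torch.load so the CUDA storages inside the checkpoint are remapped onto the CPU. A minimal sketch ("model.pth" stands in for settings.MODEL_FILE from the traceback):

import torch

# map_location="cpu" remaps every CUDA storage in the file onto the CPU,
# so the checkpoint can be unpickled on a machine without a GPU.
checkpoint = torch.load("model.pth", map_location="cpu")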

If you are using a model that extends nn.Module, you can move the whole model to the CPU or the GPU by doing:

device = torch.device("cuda")
model.to(device)
# or
device = torch.device("cpu")
model.to(device)
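
If the same script should run on both kinds of machine, a common pattern is to pick the device at runtime instead of hard-coding it (the nn.Linear below is only a stand-in for your model):

import torch
import torch.nn as nn

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2)   # any nn.Module moves the same way
model = model.to(device)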
If you only want to move a tensor:

x = torch.Tensor(10).cuda()
# or
x = torch.Tensor(10).cpu()
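
Note that torch.Tensor(10).cuda() itself fails on a CPU-only machine, so it helps to guard the move; tensors accept .to(device) just like modules (a small sketch):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(10)   # created on the CPU by default
x = x.to(device)      # moved to the GPU only when one exists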

I hope this helps.

If I am not mistaken, you get the error above at the line model = loadmodel(). I do not know what you are doing inside loadmodel(), but you could try the following:

  • Set defaults.device to cpu. To be completely sure, also add torch.cuda.set_device("cpu").
  • Change torch.load(model_weights) to torch.load(model_weights, map_location=torch.device('cpu')) (see the sketch after this list).
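
Since the traceback shows the failure happens inside loadmodel() at the torch.load call, the second point is most likely the one that matters here. A minimal sketch of a CPU-only loadmodel(), assuming the checkpoint stores a plain state dict (SmallNet and "model.pth" are placeholders; the real loader/model_loader.py builds its own network):

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Stand-in architecture; replace with the network the checkpoint was saved from."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

def loadmodel(weights_path="model.pth"):
    # map_location forces every CUDA storage in the checkpoint onto the CPU,
    # which is exactly what the RuntimeError above asks for.
    checkpoint = torch.load(weights_path, map_location=torch.device("cpu"))
    model = SmallNet()
    model.load_state_dict(checkpoint)   # adjust if the file stores {"state_dict": ...}
    model.eval()
    return model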

Please do not use both the python-3.x and python-2.7 tags on the same question - that makes no sense. Have you tried torch.cuda.set_device("cpu")?