PyTorch GPU memory increases after a load operation


I have a PyTorch model that is 386 MB on disk, but when I load the model:

state = torch.load(f, flair.device)
my GPU memory climbs to 900 MB. Why does this happen, and is there a way to fix it?
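To pin down where the memory goes, the allocated GPU memory can be measured around the load. A minimal sketch (the tensor allocation below is a stand-in for the questioner's `torch.load` call; on a CPU-only machine both readings are simply zero):

```python
import torch

def allocated_mb() -> float:
    # torch.cuda.memory_allocated() reports bytes currently held by
    # tensors on the GPU; the difference around a load is its footprint
    if not torch.cuda.is_available():
        return 0.0
    return torch.cuda.memory_allocated() / 1024 ** 2

before = allocated_mb()
if torch.cuda.is_available():
    # Stand-in for the load in the question: ~4 MB of float32 data
    x = torch.zeros(1_000_000, device="cuda")
after = allocated_mb()
print(f"before: {before:.1f} MB, after: {after:.1f} MB")
```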

This is how I save the model:

model_state = self._get_state_dict()

# additional fields for model checkpointing
model_state["optimizer_state_dict"] = optimizer_state
model_state["scheduler_state_dict"] = scheduler_state
model_state["epoch"] = epoch
model_state["loss"] = loss

torch.save(model_state, str(model_file), pickle_protocol=4)

It is probably the optimizer's state that takes up the extra space. Some optimizers, such as Adam, track statistics for every trainable parameter, e.g. first and second moments of the gradients, and this information takes up space.
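This can be checked directly: after one training step, Adam's state holds two extra buffers (`exp_avg` and `exp_avg_sq`) per parameter. A sketch with a small stand-in model (the question's flair model would behave the same way, just with far more parameters):

```python
import torch
import torch.nn as nn

# Hypothetical small model used only for illustration
model = nn.Linear(1000, 1000)
opt = torch.optim.Adam(model.parameters())

# One training step so Adam allocates its per-parameter buffers
model(torch.randn(8, 1000)).sum().backward()
opt.step()

param_count = sum(p.numel() for p in model.parameters())
# Adam keeps exp_avg and exp_avg_sq for every parameter, so its state
# holds roughly twice as many floats as the model itself
state_count = sum(
    v.numel()
    for per_param in opt.state.values()
    for v in per_param.values()
    if torch.is_tensor(v)
)
print(param_count, state_count)
```

Saving this optimizer state alongside the model weights, as the checkpoint code above does, roughly triples the size of what `torch.load` has to materialize.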

You can load it onto the CPU first:

state = torch.load(f, map_location=torch.device('cpu'))

I don't need to load it during inference, right? I mean, is there a way to avoid loading the optimizer state, or am I stuck with it? @Ryan You can load onto the CPU instead of directly onto the GPU, and then move only the model to the GPU. One last question: how do I do that?
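The approach suggested in the comments can be sketched end to end as follows. This is a minimal, self-contained example with a hypothetical model and checkpoint layout (the key names mirror the save code above); it loads the whole checkpoint onto the CPU, restores only the model weights, and moves just the model to the GPU:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Hypothetical small model standing in for the flair model
model = nn.Linear(10, 10)
opt = torch.optim.Adam(model.parameters())

# Save a checkpoint that bundles model and optimizer state
ckpt_path = os.path.join(tempfile.mkdtemp(), "checkpoint.pt")
torch.save(
    {"state_dict": model.state_dict(),
     "optimizer_state_dict": opt.state_dict()},
    ckpt_path,
)

# Load everything onto the CPU so no GPU memory is touched yet
state = torch.load(ckpt_path, map_location=torch.device("cpu"))

# Restore only the model weights; the optimizer state is not needed
# for inference and is simply never sent to the GPU
model.load_state_dict(state["state_dict"])
model.eval()

# Move just the model to the GPU (guarded for CPU-only machines)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Drop the CPU copy of the checkpoint once the weights are restored
del state
```

Only the model's ~parameters end up in GPU memory this way; the optimizer's moment buffers stay behind on the CPU and are freed with the checkpoint dict.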