Error converting a PyTorch model to TorchScript


I am trying to follow this tutorial.

The following example code works:

import torch
import torchvision

# An instance of your model.
model = torchvision.models.resnet18()

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
However, when I try other networks, such as SqueezeNet (or AlexNet), my code fails:

sq = torchvision.models.squeezenet1_0(pretrained=True)
traced_script_module = torch.jit.trace(sq, example) 

>> traced_script_module = torch.jit.trace(sq, example)                                      
/home/fabio/.local/lib/python3.6/site-packages/torch/jit/__init__.py:642: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function.
 Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 785] (3.1476082801818848 vs. 3.945478677749634) and 999 other locations (100.00%)
  _check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)

I just found out that models loaded from torchvision are in train mode by default. Both AlexNet and SqueezeNet contain dropout layers, which make inference nondeterministic in training mode, so the trace check fails. Simply switching to eval mode solves the problem:

sq = torchvision.models.squeezenet1_0(pretrained=True)
sq.eval()
traced_script_module = torch.jit.trace(sq, example)
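The nondeterminism that trips up the trace check can be seen directly with a bare dropout layer. A small sketch (shapes and seed are arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 100)

# In train mode, each forward pass samples a fresh random mask,
# so two calls on the same input generally disagree.
drop.train()
a, b = drop(x), drop(x)
print(torch.equal(a, b))

# In eval mode, dropout is the identity, so outputs are deterministic.
drop.eval()
c, d = drop(x), drop(x)
print(torch.equal(c, d))  # True
```

This is exactly why `torch.jit.trace` compares the traced output against the Python output and warns when they differ beyond the tolerance: with dropout active, the two runs see different masks.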