What are the PyTorch alternatives to Keras's input shape, output shape, get_weights, get_config and summary?
In Keras, after creating a model we can view its input and output shapes with model.input_shape and model.output_shape. For the weights and configuration we can use model.get_weights() and model.get_config() respectively. What are the similar alternatives in PyTorch? Also, is there any other functionality we need to know about when inspecting a PyTorch model?

To get a summary in PyTorch we print the model with print(model), but that gives less information than model.summary(). Does PyTorch have a better summary?

There is no model.summary() method in PyTorch. You need to use the model's built-in methods and fields.
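For instance, a rough Keras-style summary can be assembled from print(model) and parameters(); the toy model below is my own illustration, not from the question:

```python
import torch.nn as nn

# A small stand-in model; any nn.Module works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, bias=False),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)

# print(model) gives the architecture, as in the Inception3 dump below.
print(model)

# Parameter counts are not part of print(model); compute them from parameters().
total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Total params: {total}, trainable: {trainable}")
```

Third-party packages such as torchsummary/torchinfo build a Keras-like summary report on top of these same fields.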
For example, I customized the inception_v3 model. To get its information I need to use many different fields. For example:
In: print(model)
Out:
Inception3(
(Conv2d_1a_3x3): BasicConv2d(
(conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(Conv2d_2a_3x3): BasicConv2d(
(conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), bias=False)
(bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(Conv2d_2b_3x3): BasicConv2d(
(conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(Conv2d_3b_1x1): BasicConv2d(
(conv): Conv2d(64, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
(bn): BatchNorm2d(80, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
)
(Conv2d_4a_3x3): BasicConv2d(
(conv): Conv2d(80, 192, kernel_size=(3, 3), stride=(1, 1), bias=False)
...
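There is likewise no model.input_shape or model.output_shape field; the usual trick is to push a dummy tensor through the model and record shapes with forward hooks. A minimal sketch (the toy model and the record_shapes hook are my own illustration):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2),
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3),
)

shapes = []

def record_shapes(module, inputs, output):
    # inputs is a tuple of the tensors passed to forward().
    shapes.append((type(module).__name__, tuple(inputs[0].shape), tuple(output.shape)))

# Attach the hook to every leaf submodule.
for m in model.modules():
    if len(list(m.children())) == 0:
        m.register_forward_hook(record_shapes)

# One dummy forward pass fills in the shapes.
with torch.no_grad():
    model(torch.zeros(1, 3, 32, 32))

for name, in_shape, out_shape in shapes:
    print(f"{name}: in={in_shape} out={out_shape}")
```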
In: for key in model.state_dict().keys(): print(key)
Out:
Conv2d_1a_3x3.conv.weight
Conv2d_1a_3x3.bn.weight
Conv2d_1a_3x3.bn.bias
Conv2d_1a_3x3.bn.running_mean
Conv2d_1a_3x3.bn.running_var
Conv2d_1a_3x3.bn.num_batches_tracked
Conv2d_2a_3x3.conv.weight
Conv2d_2a_3x3.bn.weight
Conv2d_2a_3x3.bn.bias
Conv2d_2a_3x3.bn.running_mean
...
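The names above are the keys of model.state_dict(); the closest analogue of Keras's get_weights() is model.named_parameters(), which yields only the trainable tensors, while state_dict() also includes buffers such as the BatchNorm running statistics. A sketch on a toy module (my own, mirroring the BasicConv2d pattern above):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 4, kernel_size=3, bias=False),
    nn.BatchNorm2d(4),
)

# state_dict() covers parameters AND buffers (e.g. running_mean, running_var).
for key, tensor in model.state_dict().items():
    print(key, tuple(tensor.shape))

# named_parameters() covers only the trainable parameters.
param_names = [name for name, _ in model.named_parameters()]
print(param_names)
```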
So if I want to get the weights of the conv layer at Conv2d_1a_3x3, I look up the key "Conv2d_1a_3x3.conv.weight":
In: print(model.state_dict()["Conv2d_1a_3x3.conv.weight"])
Out:
tensor([[[[-0.2103, -0.3441, -0.0344],
[-0.1420, -0.2520, -0.0280],
[ 0.0736, 0.0183, 0.0381]],
[[ 0.1417, 0.1593, 0.0506],
[ 0.0828, 0.0854, 0.0186],
[ 0.0283, 0.0144, 0.0508]],
...
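The same tensor is also reachable as a plain module attribute (model.Conv2d_1a_3x3.conv.weight in this Inception model); a sketch on a toy module of my own:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))

# Look the weight up by its state_dict key...
w_by_key = model.state_dict()["0.weight"]
# ...or navigate to it as an attribute; both hold the same values.
w_by_attr = model[0].weight
print(torch.equal(w_by_key, w_by_attr))
```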
If you want to see the hyperparameters used in the optimizer:
optimizer.param_groups
Out:
[{'dampening': 0,
'lr': 0.01,
'momentum': 0.01,
'nesterov': False,
'params': [Parameter containing:
tensor([[[[-0.2103, -0.3441, -0.0344],
[-0.1420, -0.2520, -0.0280],
[ 0.0736, 0.0183, 0.0381]],
...
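The same fields can be read off any freshly built optimizer; a minimal sketch with SGD on a toy model (my own illustration, using the hyperparameters shown above):

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.01)

# Each entry of param_groups is a dict of hyperparameters plus the
# parameters that group updates.
group = optimizer.param_groups[0]
print(group["lr"], group["momentum"], group["nesterov"], group["dampening"])
```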