Deep learning / PyTorch: instance norm implemented with basic operations differs from torch.nn.InstanceNorm2d


I implemented instance normalization from scratch using basic PyTorch operations, but the result differs from torch.nn.InstanceNorm2d. Can anyone help me? Here is my code:

import torch
import numpy as np
x = torch.rand((8, 16, 32, 32))
a = torch.nn.InstanceNorm2d(256)
a.eval()
with torch.no_grad():
    b = a(x)
x_mean = torch.mean(x, axis=(2,3), keepdims=True)
x_var = torch.var(x, axis=(2,3), keepdims=True)
x_norm = (x - x_mean) / torch.sqrt(x_var + 1e-5)
b_numpy = b.numpy()
x_norm_numpy = x_norm.numpy()
# check whether b_numpy and x_norm_numpy agree to a tolerance of 1e-3
print(np.allclose(b_numpy, x_norm_numpy, atol=1e-3))
# check whether b_numpy and x_norm_numpy agree to a tolerance of 1e-4
print(np.allclose(b_numpy, x_norm_numpy, atol=1e-4))
Result:

True
False
So the results show that once the tolerance tightens to 1e-4, the two outputs differ, and I don't know why. Can anyone help me get a result closer to torch.nn.InstanceNorm2d?


By the way, the reason I did not apply the formula from the paper, `gamma * x_normalized_numpy + beta`, is that I found that when `torch.nn.InstanceNorm2d` is first initialized, all gammas are initialized to `[1.0, 1.0, 1.0, ...]` and all betas are initialized to `[0.0, 0.0, ...]`. So in this case, `x_normalized_numpy = gamma * x_normalized_numpy + beta`.
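A quick way to confirm the claim about the initial gamma/beta values (note that `affine=True` must be passed explicitly; with the default `affine=False` the module has no learnable gamma/beta at all, so `weight` and `bias` are `None`):

```python
import torch

# With affine=True, InstanceNorm2d has learnable per-channel gamma (weight)
# and beta (bias), initialized to ones and zeros respectively
m = torch.nn.InstanceNorm2d(16, affine=True)
print(torch.all(m.weight == 1.0).item())  # gamma all ones -> True
print(torch.all(m.bias == 0.0).item())    # beta all zeros -> True
```

Since gamma is all ones and beta all zeros at initialization, the affine step is the identity and can safely be skipped when comparing against a fresh, untrained module.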

PyTorch's InstanceNorm implementation uses the biased variance estimator (dividing by N rather than N-1), while `torch.var` defaults to the unbiased one.

Try using `torch.var(x, axis=(2,3), keepdims=True, unbiased=False)`

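A minimal end-to-end check of this suggestion (a sketch, assuming `num_features` is set to 16 to match the input's channel dimension; with the default `affine=False` there is no gamma/beta to fold in):

```python
import torch

torch.manual_seed(0)
x = torch.rand(8, 16, 32, 32)

# num_features matches the channel dim; affine=False (the default),
# so the module applies no learnable scale/shift
norm = torch.nn.InstanceNorm2d(16)
norm.eval()
with torch.no_grad():
    b = norm(x)

# Per-(sample, channel) statistics over the spatial dims, using the
# *biased* variance (unbiased=False) to match InstanceNorm2d
mean = x.mean(dim=(2, 3), keepdim=True)
var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
x_norm = (x - mean) / torch.sqrt(var + 1e-5)

# With the biased estimator the two agree to float precision
print(torch.allclose(b, x_norm, atol=1e-6))
```

With `unbiased=False` the mismatch disappears well below the original 1e-4 tolerance, since the only remaining difference is floating-point rounding.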