BatchNormalization layer in TensorFlow Keras gives unexpected output values


Given the input values [1, 5] and normalizing them, I would expect a result like [-1, 1], because the mean of the batch is 3 and its standard deviation is 2, so (x - mean) / std maps 1 to -1 and 5 to 1.
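
A quick sanity check of that expectation with plain NumPy (just normalizing the two values with their own mean and standard deviation):

import numpy as np

x = np.array([1.0, 5.0])
print((x - x.mean()) / x.std())  # -> [-1.  1.], since mean = 3 and std = 2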

However, here is a minimal example:

import numpy as np

import keras
from keras.models import Model
from keras.layers import Input
from keras.layers.normalization import BatchNormalization
from keras import backend as K

shape = (1,2,1)
input = Input(shape=shape)
x = BatchNormalization(center=False)(input) # no beta
model = Model(inputs=input, outputs=x)
model.compile(loss='mse', optimizer='sgd')

# training with dummy data
training_in = [np.random.random(size=(10, *shape))]
training_out = [np.random.random(size=(10, *shape))]
model.fit(training_in, training_out, epochs=10)

data_in = np.array([[[[1], [5]]]], dtype=np.float32)
data_out = model.predict(data_in)

print('gamma   :', K.eval(model.layers[1].gamma))
#print('beta    :', K.eval(model.layers[1].beta))
print('moving_mean:', K.eval(model.layers[1].moving_mean))
print('moving_variance:', K.eval(model.layers[1].moving_variance))

print('epsilon :', model.layers[1].epsilon)
print('data_in :', data_in)
print('data_out:', data_out)
which produces the following output:

gamma   : [ 0.80644524]
moving_mean: [ 0.05885344]
moving_variance: [ 0.91000736]
epsilon : 0.001
data_in : [[[[ 1.]
   [ 5.]]]]
data_out: [[[[ 0.79519051]
   [ 4.17485714]]]]
So the result is [0.79519051, 4.17485714] rather than [-1, 1].

I had a look at the Keras source, and the values seem to be forwarded to tf.nn.batch_normalization. This should yield the result I expect, but obviously it doesn't.


So how are the output values calculated?

If you are using gamma, the right equation is actually result = gamma * (x - mean) / sqrt(var) for batch normalization, BUT mean and var are not always the same:

  • During training (fit), they are mean_batch and var_batch, calculated from the input values of the batch (they are simply the mean and variance of your batch), just as you are doing. In the meantime, a global moving_mean and moving_variance are learned this way: moving_mean = alpha * moving_mean + (1 - alpha) * mean_batch, where alpha is a kind of learning rate in (0, 1), usually above 0.9 (see the sketch after this list). moving_mean and moving_variance are approximations of the real mean and variance of all your training data. gamma is also learned, by the usual gradient descent, to best fit your output.

  • During inference (predict), you simply use the learned values of moving_mean and moving_variance, not mean_batch and var_batch. You also use the learned gamma.
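
A rough sketch of how those running statistics are maintained, in plain NumPy (illustrative only, not Keras's actual implementation; alpha is set to Keras's default of 0.99):

import numpy as np

alpha = 0.99                              # the "learning rate" above
moving_mean, moving_variance = 0.0, 1.0   # typical initial values

for _ in range(1000):                     # one dummy batch per step
    batch = np.random.random(size=(10,))
    mean_batch, var_batch = batch.mean(), batch.var()  # this batch only
    # exponential moving averages, updated once per batch
    moving_mean = alpha * moving_mean + (1 - alpha) * mean_batch
    moving_variance = alpha * moving_variance + (1 - alpha) * var_batch

# both converge towards the statistics of the whole data stream
print(moving_mean, moving_variance)       # -> roughly 0.5 and 0.08 for uniform data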

So, 0.05885344 is just an approximation of the mean of your random input data, and 0.91000736 of its variance, and you use those to normalize your new data [1, 5]. You can easily check that [0.79519051, 4.17485714] = gamma * ([1, 5] - moving_mean) / sqrt(moving_variance).
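
Checking this with the numbers printed above (including epsilon, as the edit below explains):

import numpy as np

gamma, moving_mean, moving_variance, epsilon = 0.80644524, 0.05885344, 0.91000736, 0.001
x = np.array([1.0, 5.0])
print(gamma * (x - moving_mean) / np.sqrt(moving_variance + epsilon))
# -> [ 0.7951905  4.1748571], matching data_out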


EDIT: alpha is called momentum in Keras, if you want to check it.
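
It is exposed as the momentum argument of the layer (default 0.99), e.g.:

from keras.layers.normalization import BatchNormalization

bn = BatchNormalization(momentum=0.99)  # momentum plays the role of alpha above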

The correct formula is:

result = gamma * (input - moving_mean) / sqrt(moving_variance + epsilon) + beta
And here is a script to verify it:

import math
import numpy as np
import tensorflow as tf
from keras import backend as K

from keras.models import Model
from keras.layers import Input
from keras.layers.normalization import BatchNormalization

np.random.seed(0)

print('=== keras model ===')
input_shape = (1,2,1)
input = Input(shape=input_shape)
x = BatchNormalization()(input)
model = Model(inputs=input, outputs=x)
model.compile(loss='mse', optimizer='sgd')
training_in = [np.random.random(size=(10, *input_shape))]
training_out = [np.random.random(size=(10, *input_shape))]
model.fit(training_in, training_out, epochs=100, verbose=0)
data_in = [[[1.0], [5.0]]]
data_model = np.array([data_in])
result = model.predict(data_model)
gamma = K.eval(model.layers[1].gamma)
beta = K.eval(model.layers[1].beta)
moving_mean = K.eval(model.layers[1].moving_mean)
moving_variance = K.eval(model.layers[1].moving_variance)
epsilon = model.layers[1].epsilon
print('gamma:          ', gamma)
print('beta:           ', beta)
print('moving_mean:    ', moving_mean)
print('moving_variance:', moving_variance)
print('epsilon:        ', epsilon)
print('data_in:        ', data_in)
print('result:         ', result)

print('=== numpy ===')
np_data = [data_in[0][0][0], data_in[0][1][0]]
np_mean = moving_mean[0]
np_variance = moving_variance[0]
np_offset = beta[0]
np_scale = gamma[0]
np_result = [np_scale * (x - np_mean) / math.sqrt(np_variance + epsilon) + np_offset for x in np_data]
print(np_result)

print('=== tensorflow ===')
tf_data = tf.constant(data_in)
tf_mean = tf.constant(moving_mean)
tf_variance = tf.constant(moving_variance)
tf_offset = tf.constant(beta)
tf_scale = tf.constant(gamma)
tf_variance_epsilon = epsilon
tf_result = tf.nn.batch_normalization(tf_data, tf_mean, tf_variance, tf_offset, tf_scale, tf_variance_epsilon)
tf_sess = tf.Session()
print(tf_sess.run(tf_result))

print('=== keras backend ===')
k_data = K.constant(data_in)
k_mean = K.constant(moving_mean)
k_variance = K.constant(moving_variance)
k_offset = K.constant(beta)
k_scale = K.constant(gamma)
k_variance_epsilon = epsilon
k_result = K.batch_normalization(k_data, k_mean, k_variance, k_offset, k_scale, k_variance_epsilon)
print(K.eval(k_result))
Output:

gamma:           [ 0.22297101]
beta:            [ 0.49253803]
moving_mean:     [ 0.36868709]
moving_variance: [ 0.41429576]
epsilon:         0.001
data_in:         [[[1.0], [5.0]]]
result:          [[[[ 0.71096909]
   [ 2.09494853]]]]

=== numpy ===
[0.71096905498374263, 2.0949484904433255]

=== tensorflow ===
[[[ 0.71096909]
  [ 2.09494853]]]

=== keras backend ===
[[[ 0.71096909]
  [ 2.09494853]]]

Awesome, thanks a lot. Can you tell me where beta goes in the formula when center=True? I would have guessed output = gamma * (input - moving_mean) / sqrt(moving_variance) + beta, but my values don't match. That's what it should be according to tensorflow. It looks like their documentation might do output = gamma * ((input - moving_mean) / sqrt(moving_variance) + beta). But neither fits exactly, and I don't know why... I found the problem: we weren't using epsilon. The fully correct formula is result = gamma * (input - moving_mean) / sqrt(moving_variance + epsilon) + beta. I had tried that too and still didn't get the right values.. maybe I just plugged in the numbers wrong, or something else is off...
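
For the record, plugging the printed values into the corrected formula reproduces the model's output exactly, while the variant with beta inside the parentheses does not:

import math

gamma, beta = 0.22297101, 0.49253803
moving_mean, moving_variance, epsilon = 0.36868709, 0.41429576, 0.001

for x in [1.0, 5.0]:
    z = (x - moving_mean) / math.sqrt(moving_variance + epsilon)
    print(gamma * z + beta)    # 0.7109..., 2.0949... -> matches result above
    print(gamma * (z + beta))  # 0.3282..., 1.7122... -> does not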