
Batch normalization at inference time in TensorFlow (Python)


I have loaded a trained checkpoint file for inference, and I have extracted beta, the moving mean, the moving variance, and all the weights from the model. But when I compute the batch normalization output by hand from those values, I get the wrong result. [UPDATE]

Here I am sharing my code. It loads the checkpoint, prints the input to batch normalization, prints beta, the moving mean, and the moving variance, and prints the batch normalization output to the console:

import tensorflow as tf
import cv2
import numpy as np
import time
import os

def main():
    with tf.Session() as sess:        

        #[INFO] code for loading checkpoint
        #---------------------------------------------------------------------
        saver = tf.train.import_meta_graph("./bag-model-34000.meta")
        saver.restore(sess, tf.train.latest_checkpoint("./"))
        graph = tf.get_default_graph()
        input_place = graph.get_tensor_by_name('input/image_input:0')
        op = graph.get_tensor_by_name('output/image_output:0')
        #----------------------------------------------------------------------

        #[INFO] generating random input data matching the input tensor's shape
        #----------------------------------------------------------------------
        input_data = np.random.randint(255, size=(1, 320, 240, 3)).astype(float)
        #----------------------------------------------------------------------

        #[INFO] code to get the name of every tensor
        #----------------------------------------------------------------------
        operations = sess.graph.get_operations()
        name_of_tensor = ''
        tens_name = []          # store the name of every tensor in a list
        for operation in operations:
            #print(operation.name, "=> \n", operation.values())

            if operation.values():
                # pull the tensor name out of the string form of the op's
                # output tuple, e.g. "(<tf.Tensor 'name:0' shape=... dtype=float32>,)"
                name_of_tensor = str(operation.values()).split()[1][1:-1]

            # ops without outputs repeat the previous name, so the numbering
            # below stays aligned with the console output
            tens_name.append(name_of_tensor)
        #------------------------------------------------------------------------

        #[INFO] printing the input to batch normalization, plus beta, moving mean and moving variance,
        # so that I can compute the batch normalization output by hand
        #------------------------------------------------------------------------   
        tensor_number = 0
        for tname in tens_name:         # looping through each tensor name

            if tensor_number <= 812:      # I am interested in first 812 tensors
                tensor = graph.get_tensor_by_name(tname)
                tensor_values = sess.run(tensor, feed_dict={input_place: input_data})
                print("tensor: ", tensor_number, ": ", tname, ": \n\t\t", tensor_values.shape)


                # [INFO] tensor 28 is named "input/conv1/conv1_1/separable_conv2d:0";
                # its output is the input to the batch normalization
                if tensor_number == 28:
                    # here I am printing this tensor's output
                    print(tensor_values)            # [[[[-0.03182551  0.00226904  0.00440771 ... 
                    print(tensor_values.shape)      # (1, 320, 240, 32)


                # [INFO] tensor 31 is named "conv1/conv1_1/BatchNorm/beta:0";
                # its output is all the betas
                if tensor_number == 31:
                    # here I am printing the betas
                    print(tensor_values)            # [ 0.04061257 -0.16322449 -0.10942575 ...
                    print(tensor_values.shape)      # (32,)


                # [INFO] tensor 35 is named "conv1/conv1_1/BatchNorm/moving_mean:0";
                # its output is all the moving means
                if tensor_number == 35:
                    # here I am printing the moving means
                    print(tensor_values)            # [-0.0013569   0.00618145  0.00248459 ...
                    print(tensor_values.shape)      # (32,)


                # [INFO] tensor 39 is named "conv1/conv1_1/BatchNorm/moving_variance:0";
                # its output is all the moving variances
                if tensor_number == 39:
                    # here I am printing the moving variances
                    print(tensor_values)            # [4.48082483e-06 1.21615967e-05 5.37582537e-06 ...
                    print(tensor_values.shape)      # (32,)


                # [INFO] tensor 44 is named "input/conv1/conv1_1/BatchNorm/FusedBatchNorm:0";
                # it performs the batch normalization
                if tensor_number == 44:
                    # here I am printing the output of this tensor
                    print(tensor_values)            # [[[[-8.45019519e-02  1.23237416e-01 -4.60943699e-01 ...
                    print(tensor_values.shape)      # (1, 320, 240, 32)

            tensor_number = tensor_number + 1
        #---------------------------------------------------------------------------------------------

if __name__ == "__main__":
    main()
So I am confused: why does my manually computed result differ so much from the values the graph produces?

I also cross-checked beta, the moving mean, and the moving variance using tf.compat.v1.global_variables(). All of them match the beta, moving mean, and moving variance values printed on the console.
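That cross-check was along these lines (a minimal sketch, run inside the same session as the script above):

# print every restored batch-norm variable next to its name
for var in tf.compat.v1.global_variables():
    if 'BatchNorm' in var.name:
        print(var.name, sess.run(var))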

So why do I get the wrong result after manually substituting these values into the equations for x_hat and y?
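For reference, the inference-time batch normalization equations being substituted into are x_hat = (x - moving_mean) / sqrt(moving_variance + epsilon) and y = gamma * x_hat + beta. A minimal NumPy sketch of that manual computation, assuming slim.batch_norm's defaults of epsilon = 0.001 and scale=False (so gamma = 1):

import numpy as np

def manual_batch_norm(x, moving_mean, moving_variance, beta, eps=1e-3):
    # x has shape (1, 320, 240, 32); the (32,) statistics broadcast
    # over the channel axis
    x_hat = (x - moving_mean) / np.sqrt(moving_variance + eps)
    return x_hat + beta   # y = gamma * x_hat + beta, with gamma = 1 (scale=False)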

Also, here is my console output from tensor_number 28 through 44:

tensor:  28 :  input/conv1/conv1_1/separable_conv2d:0 : 
                 (1, 320, 240, 32)
[[[[-0.03182551  0.00226904  0.00440771 ... -0.01204819  0.02620635

tensor:  29 :  input/conv1/conv1_1/BatchNorm/Const:0 : 
                 (32,)
tensor:  30 :  conv1/conv1_1/BatchNorm/beta/Initializer/zeros:0 : 
                 (32,)

tensor:  31 :  conv1/conv1_1/BatchNorm/beta:0 : 
                 (32,)
[ 0.04061257 -0.16322449 -0.10942575  0.05056419 -0.13785222  0.4060304

tensor:  32 :  conv1/conv1_1/BatchNorm/beta/Assign:0 : 
                 (32,)
tensor:  33 :  conv1/conv1_1/BatchNorm/beta/read:0 : 
                 (32,)
tensor:  34 :  conv1/conv1_1/BatchNorm/moving_mean/Initializer/zeros:0 : 
                 (32,)

tensor:  35 :  conv1/conv1_1/BatchNorm/moving_mean:0 : 
                 (32,)
[-0.0013569   0.00618145  0.00248459  0.00340403  0.00600711  0.00291052

tensor:  36 :  conv1/conv1_1/BatchNorm/moving_mean/Assign:0 : 
                 (32,)
tensor:  37 :  conv1/conv1_1/BatchNorm/moving_mean/read:0 : 
                 (32,)
tensor:  38 :  conv1/conv1_1/BatchNorm/moving_variance/Initializer/ones:0 : 
                 (32,)

tensor:  39 :  conv1/conv1_1/BatchNorm/moving_variance:0 : 
                 (32,)
[4.48082483e-06 1.21615967e-05 5.37582537e-06 1.40261754e-05

tensor:  40 :  conv1/conv1_1/BatchNorm/moving_variance/Assign:0 : 
                 (32,)
tensor:  41 :  conv1/conv1_1/BatchNorm/moving_variance/read:0 : 
                 (32,)
tensor:  42 :  input/conv1/conv1_1/BatchNorm/Const_1:0 : 
                 (0,)
tensor:  43 :  input/conv1/conv1_1/BatchNorm/Const_2:0 : 
                 (0,)

tensor:  44 :  input/conv1/conv1_1/BatchNorm/FusedBatchNorm:0 : 
                 (1, 320, 240, 32)
[[[[-8.45019519e-02  1.23237416e-01 -4.60943699e-01 ...  3.77691090e-01


I solved this problem. The batch normalization operation was still running in training mode.

So it was using the batch mean, the batch variance, and beta = 0, instead of the provided moving mean, moving variance, and beta.

I therefore computed the batch mean and the batch variance myself, substituted those values into the equation, and got the correct output.
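A minimal sketch of that check, where x is assumed to hold the printed value of input/conv1/conv1_1/separable_conv2d:0 and epsilon is assumed to be slim.batch_norm's default of 0.001:

import numpy as np

batch_mean = x.mean(axis=(0, 1, 2))   # per-channel mean over N, H, W
batch_var = x.var(axis=(0, 1, 2))     # per-channel variance over N, H, W
y = (x - batch_mean) / np.sqrt(batch_var + 1e-3)   # beta behaves as 0, gamma = 1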

So how can I force it to use the moving mean, the moving variance, and the provided beta? I tried to make this change by setting training to False, but it does not work:

for tname in tens_name:               # looping through each tensor name

    if tensor_number <= 812:          # I am interested in the first 812 tensors
        training = tf.placeholder(tf.bool, name='training')
        is_training = tf.placeholder(tf.bool, name='is_training')
        tensor = graph.get_tensor_by_name(tname)
        tensor_values = sess.run(tensor, feed_dict={is_training: False, training: False, input_place: input_data})

From the comments:

How did you extract your values? Your mean is around 1e-3 and your variance around 4.5e-6, which means 0.02 is many positive standard deviations away, so a normalized value of 10 looks entirely reasonable for those statistics. I therefore suspect that either these are not the correct values for this batch norm layer or your input values are wrong, so please update your question with how you obtained these values and their inputs (for example, is the input also normalized before being fed into the model)?

Thanks for your comment. I have shared my code above, which shows how I obtained the input to batch normalization, beta, the moving mean, and the moving variance. I am getting the same values as you.

Can you print the values of tensor 29? I think it might be affecting the value of the x tensor, but I am not sure, because it carries both the 'input' scope and the batch norm scope. Can you clarify?

I have solved the problem: the batch normalization operation was using the batch mean, the batch variance, and beta = 0 instead of the provided moving mean, moving variance, and beta. I computed the batch mean and batch variance, substituted those values into the equation, and got the correct output. Thanks for your help.

Are your 'training' and 'is_training' actually used in the graph, or are you just defining them and feeding them into the model without them being part of the graph?

training is used in the graph.

In the snippet you provided, you define them in your code rather than taking them from the graph. Can you show where they are used in your actual code? Do not pass True into the function; pass the placeholder to the load_cnn function. In your first snippet you are creating a new 'is_training' placeholder that has no connection to the one used in the graph.

I only have a checkpoint file. After loading the checkpoint, how can I set it to False and use it for inference? For reference, here is where is_training is used in my actual code:
def load_cnn(self, keep_prob=0.5, num_filt=32, num_layers=2, is_training=True):
    self.reuse = False
    with tf.name_scope('input'):
        self.image_input = tf.placeholder(tf.float32, shape=[None, None, None, 3], name='image_input')
        net = self.image_input

        with slim.arg_scope([slim.separable_conv2d],
                            depth_multiplier=1,
                            normalizer_fn=slim.batch_norm,
                            normalizer_params={'is_training': is_training},
                            activation_fn=tf.nn.relu,
                            weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                            weights_regularizer=slim.l2_regularizer(0.0005)):

            # Down scaling
            # Block 1
            net = slim.repeat(net, 2, slim.separable_conv2d, num_filt, [3, 3], scope='conv1')
            print('en_conv1', net.shape, net.name)  # 320x240x3 -> 320x240x32 (SAME padding)
            self.cnn_layer1 = net
            # Down sampling
            net = slim.max_pool2d(net, [2, 2], scope='pool1')
            print('en_maxpool1', net.shape, net.name)  # 320x240x32 -> 160x120x32
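Following the suggestion in the comments, the fix is to rebuild the graph in code with is_training wired to a placeholder (or to the Python constant False) and then restore only the variables from the checkpoint, rather than importing the old meta graph, in which is_training was baked in as the constant True. A minimal sketch, assuming a hypothetical MyNetwork class that defines load_cnn and that the output tensor keeps the name used above:

import numpy as np
import tensorflow as tf

tf.reset_default_graph()

net = MyNetwork()                      # hypothetical: whatever class defines load_cnn
is_training = tf.placeholder(tf.bool, name='is_training')
net.load_cnn(is_training=is_training)  # the placeholder is now part of the graph

with tf.Session() as sess:
    # restore only the weights into the rebuilt graph; do NOT call
    # import_meta_graph, which would bring back is_training == True
    saver = tf.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint("./"))

    input_data = np.random.randint(255, size=(1, 320, 240, 3)).astype(np.float32)
    output = sess.graph.get_tensor_by_name('output/image_output:0')
    result = sess.run(output, feed_dict={net.image_input: input_data,
                                         is_training: False})

If load_cnn is instead called with the constant False, the is_training entry in feed_dict can be dropped, and slim.batch_norm will always use the stored moving mean and moving variance.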