Machine Learning / TensorFlow: InvalidArgumentError: "You must feed a value for placeholder tensor"

Tags: machine-learning, tensorflow, deep-learning

Here is a simple piece of TensorFlow code that creates two models with shared parameters but different inputs (placeholders).

When I run the 'train_step' node, I get the following error:

    InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'states_test' with dtype float and shape [?,64]
         [[Node: states_test = Placeholder[dtype=DT_FLOAT, shape=[?,64], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
         [[Node: mean_squared_error/value/_77 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_2678_mean_squared_error/value", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
The train_step node is not connected to the 'states_test' placeholder and does not need it in order to run, so why do I have to feed it?

However, if I change the model function so that the second network is created after the optimizer, the code runs without any error (like this):

Why does this happen, even though both versions produce the same TensorFlow graph?
Can anyone explain this behavior?
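For reference, here is a minimal sketch of the kind of setup the question describes. The names (`network`, `states`, `states_test`) and the use of a `training` placeholder for the batch norms are assumptions; the code targets the TF 1.x graph API, with a small shim so it also runs under TF 2.x:

```python
import tensorflow as tf

# Portability shim: run the TF 1.x graph API under TF 2.x as well.
if hasattr(tf.compat, "v1"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

is_training = tf.placeholder(tf.bool, name="is_training")

def network(x, reuse):
    # Shared weights via the variable scope; each batch_normalization
    # call adds moving mean/variance update ops to GraphKeys.UPDATE_OPS.
    with tf.variable_scope("net", reuse=reuse):
        h = tf.layers.dense(x, 64, activation=tf.nn.relu)
        h = tf.layers.batch_normalization(h, training=is_training)
        return tf.layers.dense(h, 1)

states = tf.placeholder(tf.float32, [None, 64], name="states")
states_test = tf.placeholder(tf.float32, [None, 64], name="states_test")
y = tf.placeholder(tf.float32, [None, 1], name="y")

out = network(states, reuse=False)
out_test = network(states_test, reuse=True)  # test graph built BEFORE the optimizer

loss = tf.losses.mean_squared_error(labels=y, predictions=out)

# UPDATE_OPS now holds the batch-norm updates of BOTH graphs, so
# train_step transitively depends on ops that read 'states_test'.
extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    train_step = tf.train.RMSPropOptimizer(.00002).minimize(loss)
```

Running `train_step` in this version reproduces the error, because the control dependency pulls in the test graph's batch-norm updates, which read `states_test`.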

The problem is the use of batch norm, i.e. these lines:

extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(extra_update_ops):
    self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)
Note that you have two graphs that share variables: the training graph and the test graph. You build both of them first, and only then create the optimizer. However, you place a control dependency on extra_update_ops, which is the collection of all update ops. The problem is that every batch norm creates update ops (to track the moving mean/variance), so there is one set in the training graph and another in the test graph. By requesting the control dependency, you tell TF that your train step may execute if and only if the batch-norm statistics are updated in both the training and the test graph, and that requires feeding a test sample. So what should you do? Either change extra_update_ops to include only the training-graph updates (by name scope, manual filtering, or any other method), or call tf.get_collection before building the test graph, like so:

def model(self):
    self.out = self.network(self.x, False)
    # Note that at this point we only gather train batch_norms
    extra_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)

    self.out_test = self.network(self.x_test, True)

    self.loss = tf.losses.mean_squared_error(self.out, self.y)
    with tf.control_dependencies(extra_update_ops):
        self.train_step = tf.train.RMSPropOptimizer(.00002).minimize(self.loss)
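The other option, filtering by name scope, could be sketched as follows. The scope name "net" is an assumption; note that the `scope` argument of `tf.get_collection` is a regex matched against the start of each op name, so the trailing slash avoids also matching the test graph's "net_1" scope:

```python
import tensorflow as tf

# Portability shim: run the TF 1.x graph API under TF 2.x as well.
if hasattr(tf.compat, "v1"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

training = tf.placeholder(tf.bool)
x = tf.placeholder(tf.float32, [None, 64])
x_test = tf.placeholder(tf.float32, [None, 64])

def network(inp, reuse):
    with tf.variable_scope("net", reuse=reuse):
        return tf.layers.batch_normalization(inp, training=training)

out = network(x, reuse=False)           # its ops live under name scope "net/"
out_test = network(x_test, reuse=True)  # its ops live under "net_1/"

# Keep only the training graph's update ops: 'scope' is matched as a
# regex against the start of the op name, so "net/" excludes "net_1/".
train_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS, scope="net/")

with tf.control_dependencies(train_update_ops):
    train_step = tf.no_op(name="train_step")  # stand-in for the optimizer
```

With this filter, `train_step` depends only on the training graph's batch-norm updates, so no test feed is required.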
You may also want to pass reuse=True to your batch norms.
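For example, if the batch norm lives inside a variable scope, passing reuse=True on the second call makes the test graph share gamma/beta and the moving statistics instead of creating a second set of variables (a sketch under the same assumed `network` function):

```python
import tensorflow as tf

# Portability shim: run the TF 1.x graph API under TF 2.x as well.
if hasattr(tf.compat, "v1"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

x = tf.placeholder(tf.float32, [None, 64])
x_test = tf.placeholder(tf.float32, [None, 64])

def network(inp, reuse):
    # reuse=True makes this call share gamma/beta and the moving
    # mean/variance of the first call instead of creating new ones.
    with tf.variable_scope("net", reuse=reuse):
        return tf.layers.batch_normalization(inp, training=False)

out = network(x, reuse=False)
n_vars = len(tf.global_variables())     # gamma, beta, moving_mean, moving_variance

out_test = network(x_test, reuse=True)  # no new variables are created
assert len(tf.global_variables()) == n_vars
```

Without reuse=True, the test graph would track its own moving mean/variance, which would never see the training statistics.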
