Uninitialized variables in TensorFlow (machine learning)


I want to write a machine learning program. The idea is to train a model (defined in `q_model`) using RMSProp. Below is a heavily simplified version of my code, which does not work:

import tensorflow as tf
import numpy as np

#--------------------------------------
# Model definition
#--------------------------------------

# Let's use a simple nn for the Q value function

W = tf.Variable(tf.random_normal([3,10],dtype=tf.float64), name='W')
b = tf.Variable(tf.random_normal([10],dtype=tf.float64), name='b')

def q_model(X,A):
    input = tf.concat((X,A), axis=1)
    return tf.reduce_sum( tf.nn.relu(tf.matmul(input, W) + b), axis=1)

#--------------------------------------
# Model and model initializer
#--------------------------------------

optimizer = tf.train.RMSPropOptimizer(0.9)
init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)

#--------------------------------------
# Learning
#--------------------------------------

x = np.matrix(np.random.uniform((0.,0.),(1.,1.), (1000,2)))
a = np.matrix(np.random.uniform((0),(1), 1000)).T
y = np.matrix(np.random.uniform((0),(1), 1000)).T

y_batch , x_batch, a_batch = tf.placeholder("float64",shape=(None,1), name='y'), tf.placeholder("float64",shape=(None,2), name='x'), tf.placeholder("float64",shape=(None,1), name='a')
error = tf.reduce_sum(tf.square(y_batch - q_model(x_batch,a_batch))) / 100.
train = optimizer.minimize(error)

indx = range(1000)
for i in range(100):
    # batches
    np.random.shuffle(indx)
    indx = indx[:100]
    print sess.run({'train':train}, feed_dict={'x:0':x[indx],'a:0':a[indx],'y:0':y[indx]})
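(As a side note, separate from the error reported below: the batching loop above permanently truncates `indx` to 100 elements after the first iteration, so every later iteration re-samples from the same 100 points. A minimal numpy-only sketch of per-iteration sampling that keeps the full index pool intact:)

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch_size = 1000, 100

for i in range(3):
    # Draw a fresh batch of indices each iteration instead of
    # shuffling and truncating a shared index list in place.
    batch = rng.choice(n, size=batch_size, replace=False)
    assert len(batch) == batch_size
    assert len(set(batch.tolist())) == batch_size  # no duplicates in a batch
```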
The error is:

Traceback (most recent call last):
  File "/home/samuele/Projects/GBFQI/test/tf_test.py", line 45, in <module>
    print sess.run({'train':train}, feed_dict={'x:0':x[indx],'a:0':a[indx],'y:0':y[indx]})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value b/RMSProp
     [[Node: RMSProp/update_b/ApplyRMSProp = ApplyRMSProp[T=DT_DOUBLE, _class=["loc:@b"], use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](b, b/RMSProp, b/RMSProp_1, RMSProp/update_b/Cast, RMSProp/update_b/Cast_1, RMSProp/update_b/Cast_2, RMSProp/update_b/Cast_3, gradients/add_grad/tuple/control_dependency_1)]]

Caused by op u'RMSProp/update_b/ApplyRMSProp', defined at:
  File "/home/samuele/Projects/GBFQI/test/tf_test.py", line 38, in <module>
    train = optimizer.minimize(error)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 325, in minimize
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 456, in apply_gradients
    update_ops.append(processor.update_op(self, grad))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 97, in update_op
    return optimizer._apply_dense(g, self._v)  # pylint: disable=protected-access
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/rmsprop.py", line 140, in _apply_dense
    use_locking=self._use_locking).op
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/gen_training_ops.py", line 449, in apply_rms_prop
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value b/RMSProp
     [[Node: RMSProp/update_b/ApplyRMSProp = ApplyRMSProp[T=DT_DOUBLE, _class=["loc:@b"], use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](b, b/RMSProp, b/RMSProp_1, RMSProp/update_b/Cast, RMSProp/update_b/Cast_1, RMSProp/update_b/Cast_2, RMSProp/update_b/Cast_3, gradients/add_grad/tuple/control_dependency_1)]]
Calling the model on its own works as expected and produces no error.

EDIT:

My question is different from that one. I already knew about

init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)

but I did not know that it also has to be executed after the optimizer is created.

You need to move this code:

init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)
to after these tensors (and the training op) have been created:

y_batch , x_batch, a_batch = tf.placeholder("float64",shape=(None,1), name='y'), tf.placeholder("float64",shape=(None,2), name='x'), tf.placeholder("float64",shape=(None,1), name='a')
error = tf.reduce_sum(tf.square(y_batch - q_model(x_batch,a_batch))) / 100.
train = optimizer.minimize(error)

init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)
Otherwise, the hidden variables that are added to the graph when the `Optimizer.minimize` method is called will not be initialized.

Meanwhile, calling `print sess.run(q_model(x,a))` works because all the variables used by that part of the graph are already initialized.
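To see why `minimize` creates extra state that needs initializing: RMSProp maintains a per-variable moving average of squared gradients (the `b/RMSProp` slot named in the traceback). A minimal numpy sketch of the update rule, illustrative only and not TensorFlow's exact implementation:

```python
import numpy as np

def rmsprop_step(w, g, ms, lr=0.01, decay=0.9, eps=1e-10):
    # `ms` plays the role of the hidden slot variable: a running
    # average of g**2 that must exist (be initialized) before the
    # first update can be applied.
    ms = decay * ms + (1.0 - decay) * g ** 2
    w = w - lr * g / (np.sqrt(ms) + eps)
    return w, ms

w = np.ones(3)
ms = np.zeros(3)  # the slot must be initialized, e.g. to zeros
g = np.array([0.5, -0.5, 0.0])
w, ms = rmsprop_step(w, g, ms)
```

This is why the initializer has to run after `optimizer.minimize(error)`: until then, the slot variables for `W` and `b` do not exist in the graph at all.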

By the way: use `tf.global_variables_initializer` instead of the deprecated `tf.initialize_all_variables`.

EDIT:

To perform selective initialization, you can do the following:

with tf.variable_scope("to_be_initialised"):
    train = optimizer.minimize(error)

sess.run(tf.variables_initializer(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='to_be_initialised')))

Yes, but the reason I don't actually want to initialize the variables at that point is that I use `q_model(X, A)` with different `X`, `A` (for example, substituting `A` with the output of another model). So if I had to initialize the variables every time I change `X` and `A`, I would lose the values of `W` and `b`, which I want to keep. Is there a way to initialize only the hidden variables used by `optimizer.minimize`?

`X` and `a` are placeholders, so they are designed to be changed on every call to `sess.run` without modifying the values of `W` and `b`. If you call the `init` op only once, `W` and `b` will retain their values (updated by the training, of course).

No, maybe I did not explain the situation well. Take the definition of `q_model(X, A)`. After finishing the training of `q_model(x, a)`, where `x, a` are placeholders, I want to train `q_model(x, pi_model(x))`, where `pi_model` is another TensorFlow model (not shown here for simplicity). I think I solved the problem: I will create the different optimizers at the beginning, initialize them all, and then run the code. Thanks.