Updating the initial state of a recurrent neural network in TensorFlow


Currently I have the following code:

init_state = tf.Variable(tf.zeros([batch_partition_length, state_size]))    # -> [16, 1024].
final_state = tf.Variable(tf.zeros([batch_partition_length, state_size]))

And inside my inference method, which is responsible for producing the output, I have the following:

def inference(frames):
    # Note that I declare final_state as a global variable to avoid the shadowing issue, since it is reassigned at the dynamic_rnn line below.
    global final_state
    # ....  Here we have some conv layers and so on... 

    # Now the RNN cell
    with tf.variable_scope('local1') as scope:

        # Move everything into depth so we can perform a single matrix multiply.
        shape_d = pool3.get_shape()
        shape = shape_d[1] * shape_d[2] * shape_d[3]
        # tf_shape = tf.stack(shape)
        tf_shape = 1024

        print("shape:", shape, shape_d[1], shape_d[2], shape_d[3])

        # Note that tf_shape = 1024, i.e. 1024 conv features are fed into the RNN for each frame,
        # and the batch size is also 1024. The aim is to split this batch into sequences of num_steps frames.
        reshape = tf.reshape(pool3, [-1, tf_shape])
        # Now we reshape/divide the batch_size into num_steps so that the RNN is fed a sequence of frames.
        rnn_inputs = tf.reshape(reshape, [batch_partition_length, step_size, tf_shape])

        print('RNN inputs shape: ', rnn_inputs.get_shape()) # -> (16, 64, 1024).

        cell = tf.contrib.rnn.BasicRNNCell(state_size)
        # note that rnn_outputs are the outputs but not multiplied by W.
        rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)

    # linear Wx + b
    with tf.variable_scope('softmax_linear') as scope:
        weight_softmax = \
            tf.Variable(
                tf.truncated_normal([state_size, n_classes], stddev=1 / state_size, dtype=tf.float32, name='weight_softmax'))
        bias_softmax = tf.constant(0.0, tf.float32, [n_classes], name='bias_softmax')

        softmax_linear = tf.reshape(
            tf.matmul(tf.reshape(rnn_outputs, [-1, state_size]), weight_softmax) + bias_softmax,
            [batch_size, n_classes])

        print('Output shape:', softmax_linear.get_shape())

    return softmax_linear
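
To make the reshape in local1 concrete: there are 1024 frames per batch and 1024 conv features per frame, and the flattened tensor is regrouped into batch_partition_length = 16 sequences of step_size = 64 time steps each. A quick numpy sketch of that arithmetic (the numbers are the ones from the comments and prints above):

import numpy as np

batch_size = 1024                 # frames per batch
num_features = 1024               # conv features per frame (tf_shape above)
batch_partition_length = 16       # number of sequences
step_size = 64                    # time steps per sequence

flat = np.zeros((batch_size, num_features), dtype=np.float32)               # plays the role of the reshape tensor above
rnn_inputs = flat.reshape(batch_partition_length, step_size, num_features)
print(rnn_inputs.shape)           # (16, 64, 1024), matching the dynamic_rnn input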

# Here we define the loss, accuracy and the optimizer.
# now run the graph:

with tf.Session() as sess:
    _, accuracy_train, loss_train, summary = \
            sess.run([optimizer, accuracy, cost_scalar, merged], feed_dict={x: image_batch,
                                                                            y_valence: valences,
                                                                            confidence_holder: confidences})

    ....
Question: How can I assign the value stored in final_state to init_state? In other words, how do I update the value of one variable given the value of another?
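
As far as I understand it, tf.assign only adds an assignment op to the graph; the copy itself happens when that op is run inside a session. A minimal sketch of that general pattern (v and new_value are just illustrative names, not from my code):

import tensorflow as tf

v = tf.Variable(tf.zeros([16, 1024]))      # stands in for init_state
new_value = tf.ones([16, 1024])            # stands in for the value it should receive
update_v = tf.assign(v, new_value)         # this only builds the op

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update_v)                     # the copy happens here
    print(sess.run(v)[0, 0])               # -> 1.0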

I used the following:

tf.assign(init_state, final_state.eval())
under the session, after running the sess.run command. However, this gives the following error: "You must feed a value for placeholder tensor 'inputs' with dtype float", where 'inputs' is declared as follows:

x = tf.placeholder(tf.float32, [None, 112, 112, 3], name='inputs')
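
My current guess at why this happens: final_state.eval() is essentially sess.run(final_state) with an empty feed_dict, and final_state depends on the 'inputs' placeholder through the conv layers, so evaluating it starts a second run of the graph that again needs the placeholder to be fed. If that reading is right, the state could instead be fetched in the same run as the training step, along the lines of (names as in the session code above):

_, accuracy_train, loss_train, summary, final_state_value = \
        sess.run([optimizer, accuracy, cost_scalar, merged, final_state],
                 feed_dict={x: image_batch,
                            y_valence: valences,
                            confidence_holder: confidences})
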
And the feeding is done after reading the images from the TFRecords file with the following code:

example = tf.train.Example()
example.ParseFromString(string_record)

height = int(example.features.feature['height']
             .int64_list
             .value[0])

width = int(example.features.feature['width']
            .int64_list
            .value[0])

img_string = (example.features.feature['image_raw']
              .bytes_list
              .value[0])

img_1d = np.fromstring(img_string, dtype=np.uint8)
reconstructed_img = img_1d.reshape((height, width, -1)) # Where this is added to the image_batch list, which is fed into the placeholder. 
And if I try the following instead:

img_1d = np.fromstring(img_string, dtype=np.float32)
this produces the following error:

ValueError: cannot reshape array of size 9408 into shape (112,112,newaxis)
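
The size 9408 looks consistent with reading the raw uint8 bytes as float32: a 112x112x3 uint8 image is 112 * 112 * 3 = 37632 bytes, and interpreting those bytes as 4-byte floats gives 37632 / 4 = 9408 values, which can no longer be reshaped to (112, 112, -1). If that is the cause, the decode should stay uint8 and the cast to float should only happen afterwards, roughly:

import numpy as np

img_1d = np.fromstring(img_string, dtype=np.uint8)          # 37632 values for a 112x112x3 image
reconstructed_img = img_1d.reshape((height, width, -1))      # (112, 112, 3)
reconstructed_img = reconstructed_img.astype(np.float32)     # cast after reshaping, to match the float placeholder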


Any help is much appreciated.

Here are the mistakes I had made so far (including the two lines below). After some revision, I came to the following conclusions:

tf.assign(init_state, final_state.eval())
img_1d = np.fromstring(img_string, dtype=np.float32)
  • I should not create final_state as a tf.Variable. tf.nn.dynamic_rnn already returns the final state as a tensor (which sess.run evaluates to an ndarray), so there is no need to instantiate final_state at the beginning, and I should not use global final_state inside the function definition.

  • In order to assign the final state to the initial state, I used the line below (see also the sketch after this list):

    tf.assign(init_state, final_state)
    
  • And with that, everything works correctly. Note: in TensorFlow, an operation returns its data as a numpy ndarray in Python, and as a tensorflow::Tensor in C and C++.
    Have a look at the TensorFlow documentation for more information.
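
For completeness, here is a minimal, self-contained sketch of the pattern after these fixes: init_state stays a tf.Variable, final_state is just the tensor returned by tf.nn.dynamic_rnn, and the assign op is built once and then run inside the session. The shapes and names mirror the question; the loss/optimizer are omitted and the random batches are only there to make the sketch runnable:

import numpy as np
import tensorflow as tf

batch_partition_length, step_size, num_features, state_size = 16, 64, 1024, 1024

inputs = tf.placeholder(tf.float32, [batch_partition_length, step_size, num_features])
# trainable=False is my own addition, so that the optimizer never touches the carried state.
init_state = tf.Variable(tf.zeros([batch_partition_length, state_size]), trainable=False)

cell = tf.contrib.rnn.BasicRNNCell(state_size)
# final_state is an ordinary tensor here, not a tf.Variable.
rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)

# Built once: running this op copies the current batch's final state into init_state.
carry_state = tf.assign(init_state, final_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):    # dummy batches, just to show the state being carried across runs
        batch = np.random.rand(batch_partition_length, step_size, num_features).astype(np.float32)
        sess.run(carry_state, feed_dict={inputs: batch})
    print(sess.run(init_state).shape)    # (16, 1024)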
