Tensorflow: what role does tf.contrib.rnn.OutputProjectionWrapper play on top of tf.contrib.rnn.BasicRNNCell?


In many implementations that use BasicRNNCell, I have found code like:

tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=num_neurons, activation=tf.nn.relu),
    output_size=num_outputs)
What does OutputProjectionWrapper do on top of BasicRNNCell?

According to the code implementing the call function of tf.contrib.rnn.BasicRNNCell, it already returns the output of the RNN, so we could use its call function directly:
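The step the call function computes can be sketched in NumPy. This is only an illustration (weight names are hypothetical); the key point is that for BasicRNNCell the per-step output and the new state are the same tensor of width num_neurons:

```python
import numpy as np

num_inputs, num_neurons, batch_size = 1, 100, 4
rng = np.random.default_rng(0)

# Hypothetical parameter names for the cell's kernel and bias
W_x = rng.standard_normal((num_inputs, num_neurons)) * 0.1
W_h = rng.standard_normal((num_neurons, num_neurons)) * 0.1
b = np.zeros(num_neurons)

x_t = rng.standard_normal((batch_size, num_inputs))
h_prev = np.zeros((batch_size, num_neurons))

# One BasicRNNCell step with relu activation:
# h_t = relu(x_t @ W_x + h_prev @ W_h + b)
h_t = np.maximum(0.0, x_t @ W_x + h_prev @ W_h + b)

# BasicRNNCell returns (output, new_state) where both are h_t
output_t, state_t = h_t, h_t
print(output_t.shape)  # (4, 100) -- width num_neurons, not num_outputs
```

Note that the output width is num_neurons (100), not the desired num_outputs (1), which is where the wrapper comes in.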

# Creating the Model
import tensorflow as tf  # TF 1.x API (tf.contrib was removed in TF 2.x)

num_inputs = 1
num_neurons = 100
num_outputs = 1
learning_rate = 0.005
num_train_iterations = 2000
batch_size = 1
num_time_steps = 30  # assumed; this definition is not shown in the question

tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])
y = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])

# Using Basic RNN Model

cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=num_neurons, activation=tf.nn.relu),
    output_size=num_outputs)

outputs, states = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

# MEAN SQUARED ERROR
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss)

I expected that we could pass the BasicRNNCell directly to tf.nn.dynamic_rnn, but I have no idea what OutputProjectionWrapper does before that step.

Yes, you can pass a BasicRNNCell directly to tf.nn.dynamic_rnn, or you can wrap the BasicRNNCell with a projection layer before passing it to tf.nn.dynamic_rnn. What OutputProjectionWrapper does is add a dense layer on top of the RNN's output.