Output a sequence from an RNN in TensorFlow

I have a simple script that takes a sequence of words, converts them to embeddings, and tries to predict the next word in the sentence. What I really want is to output the next 140 words. I have been able to do this by running predict 140 times, appending the newest prediction to the list each time, but that takes a very long time.

Is there a clever way to get the RNN to return a whole sequence instead of a single word?
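For context, the word-by-word generation loop currently looks roughly like this (a minimal sketch, not the exact script: seed_ids, the greedy argmax pick, and the numpy import are assumptions; sess, predictions, x, and n_steps refer to the graph below):

import numpy as np

# One sess.run per generated word: predict, append, feed the window back in.
generated = list(seed_ids)            # assumed: seed word ids, at least n_steps long
for _ in range(140):
    window = generated[-n_steps:]                      # last n_steps word ids
    probs = sess.run(predictions, feed_dict={x: [window]})
    next_id = int(np.argmax(probs[0]))                 # greedy choice of next word
    generated.append(next_id)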

 with tf.name_scope("placeholders"):
    x = tf.placeholder(dtype=tf.int32, shape=[None, n_steps])
    y = tf.placeholder(dtype=tf.int32, shape=[None])
    seq_length = tf.placeholder(tf.int32, [None])

# Let's set up the embedding converting words to vectors
with tf.name_scope("embeddings"):
    embeddings = tf.Variable(tf.random_uniform(shape=[vocab_size, embedding_size], minval=-1, maxval=1))
    train_input = tf.nn.embedding_lookup(embeddings, x)
    if is_training:
        train_input = tf.layers.batch_normalization(train_input, training=is_training)

with tf.name_scope("model"):
    # Things to consider:
    # 1: make the model stateful
    # 2: predict the next N words instead of just the next one
    lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_hidden)
                  for _ in range(n_units)]
    #if is_training:
    #    lstm_cells = [tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=0.2)
    #                  for cell in lstm_cells]
    multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)

    # outputs: [batch, n_steps, n_hidden], one vector per timestep
    # states:  final (c, h) state tuple for each stacked layer
    outputs, states = tf.nn.dynamic_rnn(multi_cell, train_input,
                                        sequence_length=seq_length,
                                        dtype=tf.float32)
    top_layer_h_state = states[-1][1]   # h state of the top layer at the last step

    hidden1 = tf.layers.dense(top_layer_h_state, units=n_hidden, activation=tf.nn.relu)
    dropout_1 = tf.layers.dropout(hidden1, rate=0.1, training=is_training)
    logits = tf.layers.dense(dropout_1, units=vocab_size, activation=None)
    predictions = tf.nn.softmax(logits)
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=y,
        logits=logits)
    loss = tf.reduce_mean(xentropy)
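Since dynamic_rnn already returns outputs with shape [batch, n_steps, n_hidden], one option the graph above does not use is to project every timestep to the vocabulary, so the model emits a prediction for each input position instead of only the last one. A minimal sketch (an assumption, not part of the original script; it needs per-timestep targets y_seq, and generating words beyond the input window would still require feeding predictions back in):

# Per-timestep targets: the next word id for every position in the window
y_seq = tf.placeholder(dtype=tf.int32, shape=[None, n_steps])

# Dense applies to the last axis, so every timestep gets its own logits
seq_logits = tf.layers.dense(outputs, units=vocab_size, activation=None)
seq_predictions = tf.nn.softmax(seq_logits)          # [batch, n_steps, vocab_size]

seq_xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=y_seq, logits=seq_logits)                 # [batch, n_steps]
seq_loss = tf.reduce_mean(seq_xentropy)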