Python IndexError: list index out of range when saving a model in TensorFlow

Tags: python, machine-learning, tensorflow, lstm

Can someone help me? I am using TensorFlow to train an LSTM network. Training runs fine, but when I try to save the model I get the error below:

Step 1, Minibatch Loss= 0.0146, Training Accuracy= 1.000
Step 1, Minibatch Loss= 0.0129, Training Accuracy= 1.000
Optimization Finished!
Traceback (most recent call last):
  File ".\lstm.py", line 169, in <module>
    save_path = saver.save(sess, "modelslstm/" + str(time.strftime("%d-%m-%Y-%H-%M-%S")) + ".ckpt")
  File "C:\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1314, in __exit__
    self._default_graph_context_manager.__exit__(exec_type, exec_value, exec_tb)
  File "C:\Python35\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 3815, in get_controller
    if self.stack[-1] is not default:
IndexError: list index out of range
I added tf.reset_default_graph(), but it did not help. Please help me solve this problem.
Thanks.

Do you have to use a context manager (the with statement on line 1)? The context manager seems to be having trouble tearing down your object; this may be a problem in the built-in __exit__. I suggest filing a bug report with the developers.
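In the meantime, one way to avoid the context manager is to create and close the session explicitly. This is only a minimal sketch assuming TensorFlow 1.x; the variable below is just a stand-in for your LSTM graph, and the checkpoint path reuses the directory from your code:

import os
import tensorflow as tf

# Explicit session management instead of `with tf.Session() as sess:`,
# so no context-manager __exit__ runs when the block is left.
w = tf.Variable(tf.zeros([2, 2]), name="w")   # stand-in for the real model
init = tf.global_variables_initializer()
saver = tf.train.Saver()

os.makedirs("modelslstm", exist_ok=True)      # same directory as in the question
sess = tf.Session()
try:
    sess.run(init)
    # ... training loop goes here ...
    save_path = saver.save(sess, "modelslstm/model.ckpt")
    print("Model saved in", save_path)
finally:
    sess.close()                              # close explicitly instead of __exit__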

It looks like self.stack is empty, but something is trying to index into it. Is that your code?
How can that be? In another piece of my code I use the same approach and the model saves successfully :(
Is self.stack part of your code? Apparently it becomes empty at a point where it must not be; why that happens depends on code that is not listed here. Have you done any debugging?
No. self.stack is TensorFlow core code; it is not part of my code.
Then the data you are feeding it may be invalid. Read the documentation of the functions you are using carefully to make sure you are not violating any preconditions.
It works now! Thanks. I did not expect the with statement to be the cause.
Please also open an issue so the developers know about it, and if the answer worked for you, please accept it by clicking the green check mark next to it.
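For reference, self.stack here is TensorFlow's internal default-graph stack. One way it can end up empty (an illustration only; whether this is exactly what happens inside the asker's encode() helper is an assumption) is calling tf.reset_default_graph() while a with tf.Session(): block is still active; the TensorFlow 1.x documentation warns that doing so results in undefined behavior. A small sketch of the placement that avoids this:

import tensorflow as tf

# tf.reset_default_graph() must not be called while a `with tf.Session():`
# block (or a graph context) is active: it clears the default-graph stack
# that the context manager's __exit__ later indexes into.
# Call it before any graph building or session creation instead:
tf.reset_default_graph()

x = tf.constant(1.0)
with tf.Session() as sess:
    print(sess.run(x))   # the block exits cleanly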
with tf.Session() as sess:
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    # from tensorflow.examples.tutorials.mnist import input_data
    # mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
    # a,b = mnist.train.next_batch(5)
    # print(b)
    # Run the initializer
    sess.run(init)
    saver = tf.train.Saver()
    merged_summary_op = tf.summary.merge_all()
    writer = tf.summary.FileWriter("trainlstm", sess.graph)
    #print(str(data.train.num_examples))
    for step in range(1, training_steps+1):
        for batch_i in range(data.train.num_examples // batch_size):
            batch_x, batch_y,name = data.train.next_batch(batch_size)
            #hasil,cost = encode(batch_x[0][0],"models/25-09-2017-15-25-54.ckpt")
            temp = []
            for batchi in range(batch_size):
                temp2 = []
                for ti in range(timesteps):
                    hasil,cost = encode(batch_x[batchi][ti],"models/25-09-2017-15-25-54.ckpt")
                    hasil = np.reshape(hasil,[num_input])
                    temp2.append(hasil.copy())
                temp.append(temp2.copy())
            batch_x = temp
            # Reshape data to get 28 seq of 28 elements
            #batch_x = batch_x.reshape((batch_size, timesteps, num_input))
            #dlib.hit_enter_to_continue()
            # Run optimization op (backprop)
            sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
            # Calculate batch loss and accuracy
            loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
                                                                 Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc))
            f.write("Step " + str(step) + ", Minibatch Loss= " + \
                  "{:.4f}".format(loss) + ", Training Accuracy= " + \
                  "{:.3f}".format(acc)+"\n")

    print("Optimization Finished!")
    save_path = saver.save(sess, "modelslstm/" + str(time.strftime("%d-%m-%Y-%H-%M-%S")) + ".ckpt")
f.close()