TensorFlow: Graph is finalized and cannot be modified


What is happening in my Linux terminal, and what do I need to change? I found some answers about this error on Stack Overflow, but they do not address my problem.

The exact error is:

2017-12-19 05:49:34 [INFO] Starting queue runners (val)
Traceback (most recent call last):
  File "/root/digits/digits/tools/tensorflow/main.py", line 627, in main
    val_model.start_queue_runners(sess)
  File "/root/digits/digits/tools/tensorflow/model.py", line 208, in start_queue_runners
    tf.add_to_collection(digits.GraphKeys.QUEUE_RUNNERS, qr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 4248, in add_to_collection
    get_default_graph().add_to_collection(name, value)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2792, in add_to_collection
    self._check_not_finalized()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2181, in _check_not_finalized
    raise RuntimeError("Graph is finalized and cannot be modified.")
RuntimeError: Graph is finalized and cannot be modified.

You can start the queue runners without modifying the graph. It is not entirely clear what you are trying to do here, but the following code is equivalent and does not modify the graph:

def start_queue_runners(self, sess):
    logging.info('Starting queue runners (%s)', self.stage)

    # Look up the queue runners for this stage; reading a collection does not
    # modify the (possibly finalized) graph.
    queue_runners = tf.get_collection(tf.GraphKeys.QUEUE_RUNNERS,
                                      scope=self.stage + '.*')

    self.queue_coord = tf.train.Coordinator()
    self.queue_threads = []

    # Create and start the threads directly instead of registering the queue
    # runners in another collection (which is what triggered the error).
    for qr in queue_runners:
        if self.stage in qr.name:
            self.queue_threads.extend(
                qr.create_threads(sess, coord=self.queue_coord, start=True))

    logging.info('Queue runners started (%s)', self.stage)
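
For completeness, the threads collected in self.queue_threads can later be shut down through the coordinator. This is only a minimal sketch of a hypothetical stop_queue_runners method (not part of the code above); it assumes start_queue_runners has already been called on the same object:

def stop_queue_runners(self):
    # Ask every thread started above to stop, then wait for them to exit.
    self.queue_coord.request_stop()
    self.queue_coord.join(self.queue_threads)
    logging.info('Queue runners stopped (%s)', self.stage)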

Thanks, I do not want to modify the default graph. I am converting the single-machine DIGITS TensorFlow code to a multi-machine, distributed TensorFlow setup, using sv = tf.train.Supervisor() and with sv.prepare_or_wait_for_session(server.target) as sess:. Single-machine NVIDIA DIGITS uses sess = tf.Session() and then calls val_model.start_queue_runners(sess) and Validation(sess, val_model, 0). When I switch the single-machine code to the distributed version with the Supervisor (with sv.prepare_or_wait_for_session(server.target) as sess:), I get the error shown above. Do I have to restructure everything that touches sess? Sorry, I am a beginner: should I restructure NVIDIA DIGITS's single-machine TensorFlow code into distributed TensorFlow, or should I change the main DIGITS code from with tf.Session() as sess: to with tf.train.MonitoredTrainingSession() as sess: directly? I think the distributed TensorFlow example on tensorflow.org is too simple to cover a slightly more complex setup like this one.
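
For reference, tf.train.Supervisor finalizes the default graph, and tf.train.MonitoredTrainingSession does the same, so any code that still calls tf.add_to_collection() after the session has been created will raise exactly this error with either approach. Keeping the collection-free start_queue_runners() from the answer and only swapping how the session is created should be enough. Below is a minimal sketch, not the DIGITS code itself; server, is_chief, log_dir, val_model and Validation are assumed to come from your existing setup:

import tensorflow as tf

# Build the graph (models, input queues, summaries) exactly as in the
# single-machine code, *before* the Supervisor is created.

# The Supervisor finalizes the default graph, so no ops or collections
# can be added after this point.
sv = tf.train.Supervisor(is_chief=is_chief, logdir=log_dir)

# prepare_or_wait_for_session() returns a session connected to the cluster;
# on non-chief workers it waits until the chief has initialized the model.
with sv.prepare_or_wait_for_session(server.target) as sess:
    # The graph is already finalized here, so only the collection-free
    # start_queue_runners() shown above will work.
    val_model.start_queue_runners(sess)
    Validation(sess, val_model, 0)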