Python distributed TensorFlow: non-chief worker stuck in starting session

I have two workers and one parameter server. The chief worker runs fine, but the non-chief worker gets stuck on this line of code

with sv.managed_session(server.target, config=config) as sess, sess.as_default():
and produces output like this:

[2017-01-06 21:24:40,954] Starting session. If this hangs, we're mostly    likely waiting to connect to the parameter server. One common cause is that the parameter server DNS name isn't resolving yet, or is misspecified.
I tensorflow/core/distributed_runtime/master_session.cc:928] Start master session 67667d6cd148265a with config:
device_filters: "/job:ps"
device_filters: "/job:worker/task:1/cpu:0"

I tensorflow/core/distributed_runtime/master_session.cc:928] Start master session 1c4e0742ba99e5ea with config:
device_filters: "/job:ps"
device_filters: "/job:worker/task:1/cpu:0"

I tensorflow/core/distributed_runtime/master_session.cc:928] Start  master session 9575940608a24fcd with config:
device_filters: "/job:ps"
device_filters: "/job:worker/task:1/cpu:0"
It starts the master session again and again.

The Supervisor is set up as follows:

def init_fn(ses):
    logger.info("Initializing all parameters.")
    ses.run(init_all_op)

config = tf.ConfigProto(device_filters=["/job:ps", "/job:worker/task:{}/cpu:0".format(args.worker_id)]) # refer to worker id
logdir = os.path.join(args.log_dir, 'train')
summary_writer = tf.train.SummaryWriter(logdir + "_%d" % args.worker_id)
sv = tf.train.Supervisor(is_chief=(args.worker_id == 0),
                         logdir=logdir,
                         saver=saver,
                         summary_op=None,
                         init_op=init_op,  # Defaults to an Operation that initializes all variables
                         init_fn=init_fn,
                         summary_writer=summary_writer,
                         ready_op=tf.report_uninitialized_variables(variables_to_save),
                         global_step=trainer.global_step[target_task],
                         save_model_secs=30,
                         save_summaries_secs=30)
Any suggestions? Many thanks.

UPDATE
Quoting the TensorFlow documentation:

In the chief task, the Supervisor works exactly as in the first example above. In the other tasks, sv.managed_session() waits for the model to be initialized before returning a session to the training code. The non-chief tasks depend on the chief task for initializing the model.

If one of the tasks crashes and restarts, managed_session() checks whether the model is initialized. If it is, it simply creates a session and returns it to the training code, which proceeds normally. If the model needs to be initialized, the chief task takes care of reinitializing it; the other tasks just wait for the model to be initialized.
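To make the "waits for the model to be initialized" part concrete, here is a small single-process sketch (not the code from this question; the two variables are toy placeholders) of what the ready_op check amounts to:

import tensorflow as tf

# Toy graph with two variables; pretend only `a` ever gets initialized by the chief.
a = tf.Variable(0, name="a")
b = tf.Variable(0, name="b")
ready_op = tf.report_uninitialized_variables()  # this is what managed_session polls

with tf.Session() as sess:
    sess.run(a.initializer)
    print(sess.run(ready_op))   # [b'b'] -> a non-chief worker would keep waiting
    sess.run(b.initializer)
    print(sess.run(ready_op))   # []     -> model is "ready", the session is returned

A non-chief worker only gets its session back once that list is empty, so any variable the chief never initializes keeps it waiting indefinitely.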


The distributed approach I use here is between-graph replication. The cause was that I define two graphs in the workers, and one of them was incomplete: I was trying to have the non-chief worker complete that graph. Because of this, it crashes and restarts at managed_session(). Once I defined both graphs completely in the chief worker, the problem was solved.
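In other words, with between-graph replication every worker process has to build the same complete graph, so that the set of variables the chief initializes matches exactly what each non-chief worker's ready_op waits for. A rough sketch of that pattern (the model below is a made-up placeholder, not the actual graphs from this question):

import tensorflow as tf

def build_graph(cluster, worker_id):
    # Every worker calls this and constructs the identical, complete graph.
    # replica_device_setter pins the variables to the PS and the ops to this worker.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % worker_id,
            cluster=cluster)):
        x = tf.placeholder(tf.float32, [None, 10])
        w = tf.Variable(tf.zeros([10, 1]), name="w")
        b = tf.Variable(tf.zeros([1]), name="b")
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b))
        global_step = tf.Variable(0, trainable=False, name="global_step")
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=global_step)
    return train_op, global_step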

Are you executing the sv.managed_session line again and again? By the way, a full reproducible example helps with answering questions like this.
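For reference, a minimal self-contained skeleton along those lines (toy ports and a toy graph, not the code from this question) would look roughly like:

import sys
import tensorflow as tf

# Usage: python toy_dist.py ps 0 | python toy_dist.py worker 0 | python toy_dist.py worker 1
cluster = tf.train.ClusterSpec({"ps": ["localhost:2222"],
                                "worker": ["localhost:2223", "localhost:2224"]})
job, task = sys.argv[1], int(sys.argv[2])
server = tf.train.Server(cluster, job_name=job, task_index=task)

if job == "ps":
    server.join()
else:
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task, cluster=cluster)):
        global_step = tf.Variable(0, trainable=False, name="global_step")
        inc = tf.assign_add(global_step, 1)

    sv = tf.train.Supervisor(is_chief=(task == 0),
                             logdir="/tmp/dist_test",
                             global_step=global_step)
    # Only the chief initializes variables; the other worker waits, then trains.
    with sv.managed_session(server.target) as sess:
        while not sv.should_stop() and sess.run(global_step) < 1000:
            sess.run(inc)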