
Python: distributed TensorFlow, master gets stuck during training and workers do not start training when using SyncReplicasOptimizer and MonitoredTrainingSession?


I am trying to write synchronous training code for distributed TensorFlow using SyncReplicasOptimizer and MonitoredTrainingSession.

The problem I am facing is that after some steps the master pauses training and none of the workers ever start training. Has anyone come across this before?

Here is the code I have written. The data is read from TFRecords, following exactly the approach described on the TensorFlow website.

def build(self):
    # Build the model, optimizer and (optionally) the sync-replicas wrapper.
    self.modelObj = Model(self.imagesize, self.targetSize)
    self.modelObj.model()
    self.global_step = tf.contrib.framework.get_or_create_global_step()
    self.opt = tf.train.AdamOptimizer(self.learningrate)
    if self.syncTraining:
        # Aggregate gradients from all workers before applying each update.
        self.trainer = tf.train.SyncReplicasOptimizer(self.opt,
                                                      replicas_to_aggregate=self.num_workers,
                                                      total_num_replicas=self.num_workers)
    else:
        self.trainer = self.opt
    self.trainstep = self.trainer.minimize(self.modelObj.loss, global_step=self.global_step)
    self.saver = tf.train.Saver(max_to_keep=1)
    self.summary_op = tf.summary.merge_all()
    self.init_op = tf.global_variables_initializer()
    if self.syncTraining:
        # Hook that coordinates chief/worker initialization for synchronous training.
        self.sync_replicas_hook = self.trainer.make_session_run_hook(is_chief=(self.task_index == 0))


def train(self):
    if self.syncTraining:
        with tf.train.MonitoredTrainingSession(master=self.server.target,
                                               is_chief=(self.task_index==0),
                                               checkpoint_dir=self.logdir,
                                               hooks=[self.sync_replicas_hook]) as self.session:
            step = 0
            try:
                while not self.session.should_stop():
                    # training

                    [trainx, trainy_] = self.session.run([self.trainx, self.trainy_])
                    feed = {self.modelObj.x: trainx, self.modelObj.y_: trainy_,
                            self.modelObj.batch: self.batch_size, self.modelObj.keep_prob: 0.7}
                    _, trainloss = self.session.run([self.trainstep, self.modelObj.loss], feed_dict=feed)

                    print("step: %d, training loss %f" % (step, trainloss))

                    step += 1

            except tf.errors.OutOfRangeError:
                print('training finished, number of epochs reached')
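
The question does not show how self.server and self.task_index are created. For reference, a minimal sketch of the usual TF 1.x cluster setup they would come from; the addresses, ports, and the hard-coded role below are placeholder assumptions, not taken from the original post:

import tensorflow as tf

# Placeholder cluster layout -- addresses and ports are assumptions.
cluster_spec = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})

# Each process starts one tf.train.Server for its own role in the cluster;
# self.server.target in the question is the target of such a server.
job_name, task_index = "worker", 0  # e.g. parsed from command-line flags
server = tf.train.Server(cluster_spec, job_name=job_name, task_index=task_index)

if job_name == "ps":
    # Parameter-server processes simply block and serve variables.
    server.join()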

Found the solution.

By adding:

time.sleep(5)

Do the same for the parameter server as well, and try running the parameter server on the CPU instead of the GPU.
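
The answer does not say exactly where the sleep goes. A minimal sketch of one plausible placement, assuming the intent is to delay the non-chief workers so the chief can finish initialization before they connect:

import time

import tensorflow as tf

def train(self):
    # Assumed placement of the delay: stagger the non-chief workers so the
    # chief has time to initialize variables and the sync-replicas queues
    # before the other replicas connect.
    if self.task_index != 0:
        time.sleep(5)

    with tf.train.MonitoredTrainingSession(master=self.server.target,
                                           is_chief=(self.task_index == 0),
                                           checkpoint_dir=self.logdir,
                                           hooks=[self.sync_replicas_hook]) as session:
        while not session.should_stop():
            pass  # run the training steps exactly as in the question's loop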

Yes, the ps should not be placed on the GPU. I had this problem too, and solved it by explicitly declaring ps_device="/job:ps/cpu:0" in tf.train.replica_device_setter. The code looks like this:

with tf.device(tf.train.replica_device_setter(
                                 ps_device="/job:ps/cpu:0",
                                 worker_device="/job:worker/task:%d" % (worker_index),
                                 cluster=cluster_spec)):

Thanks a lot @prateek agrawal
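
For completeness, a sketch of how that device setter typically wraps graph construction; the cluster_spec and worker_index values here are placeholder assumptions mirroring the cluster sketch above, not values from the answer:

import tensorflow as tf

# Assumed cluster layout and task index, mirroring the earlier sketch.
cluster_spec = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})
worker_index = 0  # placeholder task index of this worker process

with tf.device(tf.train.replica_device_setter(
        ps_device="/job:ps/cpu:0",                           # keep variables on the ps CPU
        worker_device="/job:worker/task:%d" % worker_index,  # ops run on this worker
        cluster=cluster_spec)):
    # Everything created in this block follows the placement policy above:
    # variables go to the ps (CPU), compute ops stay on the worker.
    global_step = tf.train.get_or_create_global_step()
    # ... build the model, loss and optimizer here ...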
