TensorFlow distributed seq2seq stuck forever


I am trying to launch a distributed seq2seq model in TensorFlow. This is the original single-process seq2seq model. I set up a cluster (1 ps, 3 workers) following the distributed TensorFlow tutorial.

But all the workers get stuck forever, printing the same pool allocator log messages:

start running session
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 7623 get requests, put_count=3649 evicted_count=1000 eviction_rate=0.274048 and unsatisfied allocation rate=0.665617
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 100 to 110
Here is the cluster setup in my translate.py:

  ps_hosts = ["9.91.9.129:2222"]
  worker_hosts = ["9.91.9.130:2223", "9.91.9.130:2224", "9.91.9.130:2225"]
  #worker_hosts = ["9.91.9.130:2223"]

  cluster = tf.train.ClusterSpec({"ps":ps_hosts, "worker":worker_hosts})
  server = tf.train.Server(cluster,
                            job_name=FLAGS.job_name,
                            task_index=FLAGS.task_index)
  if FLAGS.job_name == "ps":
        server.join()
  elif FLAGS.job_name == "worker":
      # Worker server 
      is_chief = (FLAGS.task_index == 0)      
      gpu_num = FLAGS.task_index
      with tf.Graph().as_default():
        with tf.device(tf.train.replica_device_setter(cluster=cluster,
            worker_device="/job:worker/task:%d/gpu:%d" % (FLAGS.task_index, gpu_num))):
          # ... build the seq2seq model and run the training loop here ...
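For reference, here is a minimal sketch (not part of the original post) of how the job_name and task_index flags could be defined and how the four processes would be launched, assuming the standard tf.app.flags pattern from the distributed TensorFlow tutorial:

  # Hypothetical flag definitions; the post does not show how FLAGS is populated.
  import tensorflow as tf

  tf.app.flags.DEFINE_string("job_name", "", "One of 'ps' or 'worker'")
  tf.app.flags.DEFINE_integer("task_index", 0, "Index of the task within its job")
  FLAGS = tf.app.flags.FLAGS

  # The cluster above would then be started as four processes, for example:
  #   python translate.py --job_name=ps     --task_index=0   # on 9.91.9.129
  #   python translate.py --job_name=worker --task_index=0   # on 9.91.9.130
  #   python translate.py --job_name=worker --task_index=1   # on 9.91.9.130
  #   python translate.py --job_name=worker --task_index=2   # on 9.91.9.130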
I used tf.train.SyncReplicasOptimizer to implement synchronous training.

And here is the relevant part of my seq2seq_model.py:

  ps_hosts = ["9.91.9.129:2222"]
  worker_hosts = ["9.91.9.130:2223", "9.91.9.130:2224", "9.91.9.130:2225"]
  #worker_hosts = ["9.91.9.130:2223"]

  cluster = tf.train.ClusterSpec({"ps":ps_hosts, "worker":worker_hosts})
  server = tf.train.Server(cluster,
                            job_name=FLAGS.job_name,
                            task_index=FLAGS.task_index)
  if FLAGS.job_name == "ps":
        server.join()
  elif FLAGS.job_name == "worker":
      # Worker server 
      is_chief = (FLAGS.task_index == 0)      
      gpu_num = FLAGS.task_index
      with tf.Graph().as_default():
        with tf.device(tf.train.replica_device_setter(cluster=cluster,
            worker_device="/job:worker/task:%d/gpu:%d" % (FLAGS.task_index, gpu_num))):
# Gradients and SGD update operation for training the model.
params = tf.trainable_variables()
if not forward_only:
  self.gradient_norms = []
  self.updates = []
  opt = tf.train.GradientDescentOptimizer(self.learning_rate)
  opt = tf.train.SyncReplicasOptimizer(
    opt,
    replicas_to_aggregate=num_workers,
    replica_id=task_index,
    total_num_replicas=num_workers)      

  for b in xrange(len(buckets)):
    gradients = tf.gradients(self.losses[b], params)
    clipped_gradients, norm = tf.clip_by_global_norm(gradients,
                                                     max_gradient_norm)
    self.gradient_norms.append(norm)
    self.updates.append(opt.apply_gradients(
          zip(clipped_gradients, params), global_step=self.global_step))


self.init_tokens_op = opt.get_init_tokens_op
self.chief_queue_runners = [opt.get_chief_queue_runner]
self.saver = tf.train.Saver(tf.all_variables())

Here is my complete Python code [here].

It seems the TensorFlow team is not yet ready to share its experience of running code on clusters; so far, comprehensive documentation can only be found in the source code.

As of version 0.11, according to SyncReplicasOptimizer.py, you have to run the following after constructing the SyncReplicasOptimizer:

init_token_op = optimizer.get_init_tokens_op()
chief_queue_runner = optimizer.get_chief_queue_runner()
Then, after the session has been created with the Supervisor, run:

  if is_chief:
    sess.run(init_token_op)
    sv.start_queue_runners(sess, [chief_queue_runner])
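Putting the two snippets together, here is a minimal sketch of what the chief worker's session setup might look like with the 0.11-era API (names such as sv, model, and FLAGS.train_dir are illustrative assumptions, not taken from the code above):

  # Sketch only (TF 0.11-style API); opt is the SyncReplicasOptimizer built in the model.
  init_token_op = opt.get_init_tokens_op()           # note: these are method calls,
  chief_queue_runner = opt.get_chief_queue_runner()  # not attribute references

  sv = tf.train.Supervisor(is_chief=is_chief,
                           logdir=FLAGS.train_dir,                # assumed checkpoint dir
                           init_op=tf.initialize_all_variables(),
                           global_step=model.global_step)

  with sv.prepare_or_wait_for_session(server.target) as sess:
    if is_chief:
      # Only the chief seeds the token queue and starts the sync queue runner;
      # if this never happens, every worker blocks waiting for tokens, which
      # matches the "stuck forever" symptom described above.
      sess.run(init_token_op)
      sv.start_queue_runners(sess, [chief_queue_runner])
    # ... normal training loop: sess.run(model.updates[bucket_id], feed_dict=...) ...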
Since 0.12 introduced SyncReplicasOptimizerV2, this code may not be sufficient, so please refer to the source code of the version you are using.