
How can I run multiple GPUs simultaneously in TensorFlow?


To understand the mechanics of distributed TensorFlow, I wrote a simple test program that uses multiple GPUs:

import numpy as np
import tensorflow as tf


def cv_data(SEED):
    np.random.seed(SEED)
    return np.random.rand(5, 2, 2)


def test(data):
    for i in range(5):
        with tf.device('/gpu:%d' % i):
            with tf.name_scope('cv%d' % i):
                x = tf.placeholder(tf.float32, [2, 2], name='x')
                y = tf.matmul(x, x)  # y is rebound on every iteration
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        writer = tf.summary.FileWriter("test_graph", sess.graph)
        sess.run(init)
        print("y is ")
        print(sess.run(y, feed_dict={'cv0/x:0': np.ones((2, 2)),
                                     'cv1/x:0': 2 * np.ones((2, 2)),
                                     'cv2/x:0': 3 * np.ones((2, 2)),
                                     'cv3/x:0': 4 * np.ones((2, 2)),
                                     'cv4/x:0': 5 * np.ones((2, 2))}))
        #tf.train.Saver.save(sess,"./model")
        writer.close()

But sess.run() only executes the graph on /gpu:4. How can I make all the GPUs run at the same time?

You can build a Python list of the ops and pass the whole list to sess.run. Alternatively, you can aggregate the results (e.g. with tf.add_n) and run that single aggregate op.

Either way, you probably want a single placeholder outside the loop, which means you feed the input value once and it gets copied to all the devices.
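A minimal sketch of that suggestion: collect one matmul op per device in a list and fetch the whole list in one sess.run call, plus an optional tf.add_n aggregate. It is written against tf.compat.v1 (an assumption on my part, so it also runs under TF 2.x), and allow_soft_placement=True is set so the ops fall back to CPU on a machine with fewer than five GPUs.

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

ys = []     # one matmul op per device
feed = {}   # feed_dict covering every placeholder
for i in range(5):
    with tf.device('/gpu:%d' % i):
        with tf.compat.v1.name_scope('cv%d' % i):
            x = tf.compat.v1.placeholder(tf.float32, [2, 2], name='x')
            ys.append(tf.matmul(x, x))           # keep a reference instead of rebinding y
            feed[x] = (i + 1) * np.ones((2, 2))

total = tf.add_n(ys)  # single op that aggregates all five results

# Soft placement lets the sketch run even without 5 physical GPUs.
config = tf.compat.v1.ConfigProto(allow_soft_placement=True)
with tf.compat.v1.Session(config=config) as sess:
    results = sess.run(ys, feed_dict=feed)       # one call evaluates all five matmuls
    total_val = sess.run(total, feed_dict=feed)
print(total_val)
```

Fetching the list lets TensorFlow schedule the five matmuls concurrently across devices, since none of them depends on another.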

Thanks! Is this what you mean? `sess.run(['cv2/MatMul:0','cv1/MatMul:0'], feed_dict={'cv0/x:0': np.ones((2,2)), 'cv1/x:0': 2*np.ones((2,2)), 'cv2/x:0': 3*np.ones((2,2))})  # run simultaneously`