Does TensorFlow on Google Cloud ML support GPUs?
I am testing Google Cloud ML to speed up my TensorFlow models. Unfortunately, Google Cloud ML seems to be very slow: my mainstream-class PC is at least 10x faster. I doubted that it was using a GPU, so I ran a test. I modified a sample to use the GPU:
```diff
diff --git a/mnist/trainable/trainer/task.py b/mnist/trainable/trainer/task.py
index 9acb349..a64a11d 100644
--- a/mnist/trainable/trainer/task.py
+++ b/mnist/trainable/trainer/task.py
@@ -131,11 +131,12 @@ def run_training():
   images_placeholder, labels_placeholder = placeholder_inputs(
       FLAGS.batch_size)

-  # Build a Graph that computes predictions from the inference model.
-  logits = mnist.inference(images_placeholder, FLAGS.hidden1, FLAGS.hidden2)
+  with tf.device("/gpu:0"):
+    # Build a Graph that computes predictions from the inference model.
+    logits = mnist.inference(images_placeholder, FLAGS.hidden1, FLAGS.hidden2)

-  # Add to the Graph the Ops for loss calculation.
-  loss = mnist.loss(logits, labels_placeholder)
+    # Add to the Graph the Ops for loss calculation.
+    loss = mnist.loss(logits, labels_placeholder)

   # Add to the Graph the Ops that calculate and apply gradients.
   train_op = mnist.training(loss, FLAGS.learning_rate)
```
This training code works on my machine (`gcloud beta ml local train ...`) but fails in the cloud with an error like this:
```
Traceback (most recent call last):
  File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 239, in <module>
    tf.app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 43, in run
    sys.exit(main(sys.argv[:1] + flags_passthrough))
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 235, in main
    run_training()
  File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 177, in run_training
    sess.run(init)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 964, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1014, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1034, in _do_call
    raise type(e)(node_def, op, message)
InvalidArgumentError: Cannot assign a device to node 'softmax_linear/biases': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
Colocation Debug Info:
Colocation group had the following types and devices:
ApplyGradientDescent: CPU
Identity: CPU
Assign: CPU
Variable: CPU
[[Node: softmax_linear/biases = Variable[container="", dtype=DT_FLOAT, shape=[10], shared_name="", _device="/device:GPU:0"]()]]
```
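The traceback shows why the pinned block fails: no GPU is registered in the process, and the colocated variable ops (`Variable`, `Assign`, `ApplyGradientDescent`) can only run on CPU. In TF 1.x, one way to keep such code runnable on CPU-only machines is to create the session with `tf.ConfigProto(allow_soft_placement=True)`; another is to pass `tf.device` a device *function* that pins only non-variable ops to the GPU. A minimal sketch of the latter; the op-type set and function name here are illustrative, not an exhaustive list:

```python
# Sketch: tf.device() also accepts a function mapping each graph op to a
# device string. This keeps variable-related ops (which may lack GPU
# kernels) on the CPU and sends everything else to the GPU.
VARIABLE_OP_TYPES = {"Variable", "VariableV2", "Assign", "Identity",
                     "ApplyGradientDescent"}

def gpu_or_cpu(op):
    """Return a device string for one graph op."""
    if op.type in VARIABLE_OP_TYPES:
        return "/cpu:0"
    return "/gpu:0"

# Usage inside the training code would look like:
#   with tf.device(gpu_or_cpu):
#       logits = mnist.inference(images_placeholder, ...)
```

Combined with `allow_soft_placement=True`, TensorFlow falls back to `/cpu:0` whenever the requested device does not exist, which is what happens on the non-GPU basic tier.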
Does Google Cloud ML support GPUs? GPUs are now in Beta, and all Cloud ML customers have access to them.
Here is how to use GPUs with Cloud ML. Have there been any problems getting them to work?

When I try to run a job that specifies a GPU, the job just sits in the queue:

```
gcloud beta ml jobs submit training gpu_job_basic_gpu \
  --package-path=train \
  --staging-bucket="${staging_bucket}" \
  --module-name=train.1-multiply \
  --region=us-central1 \
  --scale-tier=BASIC_GPU
```

Try region us-east1.

Wow, that worked. What gave you the intuition to try east when the docs clearly state that central should work?

I work at Google on GPUs in Cloud ML. We have moved out of us-central1 due to GPU demand; the docs have been updated to reflect this.

Thanks for the update; running my jobs in east now.
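For reference, the scale tier can also be requested through a job configuration file passed to `gcloud` with `--config` instead of the `--scale-tier` flag. A minimal sketch, assuming the beta CLI's config file maps onto the training-job API's `trainingInput` fields:

```yaml
# config.yaml: request the GPU-equipped basic tier for the training job
trainingInput:
  scaleTier: BASIC_GPU
```

Either way, the tier (not the code's `tf.device` call) is what determines whether a GPU is actually attached to the worker.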