Nvidia device error with CUDA in TensorFlow
To test my TensorFlow installation, I am using the MNIST example provided in the TensorFlow repository, but when I execute the convolutional.py script I get the following output:
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.so.8.0 locally
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980 Ti
major: 5 minor: 2 memoryClockRate (GHz) 1.2405
pciBusID 0000:03:00.0
Total memory: 5.93GiB
Free memory: 5.83GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x29020c0
E tensorflow/core/common_runtime/direct_session.cc:137] Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
Traceback (most recent call last):
File "convolutional.py", line 339, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "convolutional.py", line 284, in main
with tf.Session() as sess:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1187, in __init__
super(Session, self).__init__(target, graph, config=config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 552, in __init__
self._session = tf_session.TF_NewDeprecatedSession(opts, status)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.
So my conclusion is that CUDA is installed correctly, but I don't understand what is happening here. I would be grateful if anyone could help me.
For more information, here is my GPU configuration:
Tue Jan 31 19:42:10 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57 Driver Version: 367.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 560 Ti Off | 0000:01:00.0 N/A | N/A |
| 25% 45C P0 N/A / N/A | 463MiB / 958MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 980 Ti Off | 0000:03:00.0 Off | N/A |
| 0% 31C P8 13W / 280W | 1MiB / 6077MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
+-----------------------------------------------------------------------------+
Edit:
Is it normal that both NVIDIA cards have the same physical id?
sudo lshw -C "display"
*-display
description: VGA compatible controller
product: GM200 [GeForce GTX 980 Ti]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:03:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:50 memory:f9000000-f9ffffff memory:b0000000-bfffffff memory:c0000000-c1ffffff ioport:d000(size=128) memory:fa000000-fa07ffff
*-display
description: VGA compatible controller
product: GF114 [GeForce GTX 560 Ti]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:45 memory:f6000000-f7ffffff memory:c8000000-cfffffff memory:d0000000-d3ffffff ioport:e000(size=128) memory:f8000000-f807ffff
The key points in the output you show are the following:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 980 Ti
major: 5 minor: 2 memoryClockRate (GHz) 1.2405
pciBusID 0000:03:00.0
Total memory: 5.93GiB
Free memory: 5.83GiB
i.e. the desired compute device is enumerated as device 0, and
E tensorflow/core/common_runtime/direct_session.cc:137] Internal: failed initializing StreamExecutor for CUDA device ordinal 1: Internal: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_INVALID_DEVICE
i.e. the compute device producing the error is enumerated as device 1. Device 1 is your display GPU, which cannot be used for computation in TensorFlow. If you use nvidia-smi to mark that device as compute-prohibited, or use the CUDA_VISIBLE_DEVICES environment variable to make only your compute device visible to CUDA, the error should go away.

I ran into a similar error when trying to run the classify_image.py script. Because I already had a running Python session (elpy) in which I had executed some TensorFlow code, the GPU allocation was held there and was therefore unavailable to the script I was trying to run from the shell. Exiting the existing Python session resolved the error.

To answer your edit: their physical ids are not different, in both cases their physical id is 0.

So do they have the same physical id then? The PCI buses are different.

Your edit is not quite right: it is the buses that differ, but the physical ids do not, and if I'm not mistaken the physical id is what selects which GPU to use.

You are mistaken. Device enumeration in CUDA does not depend on the PCI id.

The CUDA_VISIBLE_DEVICES solution works, but in my case the nvidia-smi solution does not. I am using this command: sudo nvidia-smi -i $gpuid --compute-mode=PROHIBITED, but TensorFlow still tries to use the display GPU. The other solution works, though. Thanks for your help.
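The CUDA_VISIBLE_DEVICES approach described above can be sketched as follows. This is a minimal illustration, not taken from the question's script: it assumes that "0" is the CUDA ordinal of the compute GPU, which under the default device ordering should match the GTX 980 Ti that TensorFlow reported as device 0, but the ordinal should be verified on your own system.

```python
import os

# Restrict CUDA to a single device. The CUDA runtime reads this
# variable when it is first initialized, so it must be set BEFORE
# the first TensorFlow/CUDA call in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Only import TensorFlow after the assignment above, e.g.:
# import tensorflow as tf
# with tf.Session() as sess:
#     ...
```

Equivalently, the variable can be set on the command line for a single run: CUDA_VISIBLE_DEVICES=0 python convolutional.py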