Python: setting up a PyCharm remote conda interpreter


I'm trying to set up a remote conda interpreter on macOS Mojave with PyCharm 2019.1.2 Professional and can't get it to work. My existing remote conda environment (conda v4.5.12) runs on an Ubuntu 16 EC2 machine.

I tried pointing it at:

/home/ubuntu/anaconda3/envs/tensorflow_p36/bin/python

which is my conda environment. I then tried running a simple Tensorflow GPU test with this interpreter and got the following message, which strongly suggests that the environment is not being activated (the server's IP address and company name are deliberately obfuscated):

ssh://ubuntu@xx.xx.xx.xx:22/home/ubuntu/anaconda3/envs/tensorflow_p36/bin/python -u /home/ubuntu/company/DeepLearning_copy/apps/test_gpu.py
Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/company/DeepLearning_copy/apps/test_gpu.py", line 1, in <module>
    import tensorflow as tf
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: libcublas.so.10.0: cannot open shared object file: No such file or directory

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.

Process finished with exit code 1
The code runs perfectly when executed directly on the server, after running
conda activate tensorflow_p36
and then
python gpu_test.py
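The difference between the two launch modes can be narrowed down by comparing the environment each interpreter actually sees: conda's activation scripts typically extend LD_LIBRARY_PATH so that TensorFlow can locate CUDA libraries such as libcublas.so.10.0, and launching the env's python binary directly skips those scripts. A minimal check (the helper name is mine, and the CUDA directory is only an example):

```python
import os

def dir_on_ld_library_path(directory, env=None):
    """True if `directory` appears as an entry of LD_LIBRARY_PATH.

    Pass `env` explicitly to test arbitrary environments; by default
    the current process environment is inspected.
    """
    env = os.environ if env is None else env
    entries = env.get("LD_LIBRARY_PATH", "").split(":")
    return os.path.normpath(directory) in {os.path.normpath(e) for e in entries if e}

# Run this both after `conda activate tensorflow_p36` and under the bare
# interpreter path to compare the results.
print(dir_on_ld_library_path("/usr/local/cuda-10.0/lib64"))
```

If the directory holding libcublas.so.10.0 is on the path only after activation, that would explain why the bare interpreter fails.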

I'd welcome any workaround that allows remote debugging with the existing remote conda environment. In the meantime, I've opened … and am in contact with …


Edit: please see a potential workaround in …

I think this is a CUDA error; CUDA is not configured correctly. Are you actually using tensorflow-gpu?

OP, it may be that something done to your environment has messed up the CUDA installation, as others have mentioned.

I just provisioned a fresh Deep Learning AMI instance on AWS - is that a viable option for you?

In any case, after ssh-ing into the (newly provisioned) server, I went through the following steps:

Initial activation

$ conda activate tensorflow_p36
WARNING: First activation might take some time (1+ min).
Installing TensorFlow optimized for your Amazon EC2 instance......
Env where framework will be re-installed: tensorflow_p36
Instance p2.xlarge is identified as a GPU instance, removing tensorflow-serving-cpu
Installation complete.
Scenario 1: Running a GPU test inside the tensorflow_p36 conda environment:

This is done to make sure that Tensorflow works properly, per the OP's scenario.

$ python
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) 
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> # Creates a graph.
... a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
>>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
>>> c = tf.matmul(a, b)
>>> # Creates a session with log_device_placement set to True.
... sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Device mapping:
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7
>>> # Runs the op.
... print(sess.run(c))
MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
[[22. 28.]
 [49. 64.]]
Scenario 2: Deactivating the environment, and invoking the same python executable as in the environment.

This should be the same as configuring the remote interpreter to use that specific python binary. Note that there is a lot more output after sess = tf.Session(...) than in the case above, but everything runs properly:

$ conda deactivate
$ /home/ubuntu/anaconda3/envs/tensorflow_p36/bin/python

Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) 
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> # Creates a graph.
... a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
>>> b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
>>> c = tf.matmul(a, b)
>>> # Creates a session with log_device_placement set to True.
... sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
2019-05-31 07:14:23.840474: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-31 07:14:23.841300: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55ec160ca020 executing computations on platform CUDA. Devices:
2019-05-31 07:14:23.841334: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): Tesla K80, Compute Capability 3.7
2019-05-31 07:14:23.843647: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300060000 Hz
2019-05-31 07:14:23.843845: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55ec16131af0 executing computations on platform Host. Devices:
2019-05-31 07:14:23.843870: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2019-05-31 07:14:23.844965: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:1e.0
totalMemory: 11.17GiB freeMemory: 11.11GiB
2019-05-31 07:14:23.844992: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-31 07:14:23.845991: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-31 07:14:23.846013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2019-05-31 07:14:23.846020: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2019-05-31 07:14:23.846577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10805 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
Device mapping:
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7
2019-05-31 07:14:23.847176: I tensorflow/core/common_runtime/direct_session.cc:317] Device mapping:
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7

>>> # Runs the op.
... print(sess.run(c))
MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
2019-05-31 07:14:25.478310: I tensorflow/core/common_runtime/placer.cc:1059] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2019-05-31 07:14:25.478383: I tensorflow/core/common_runtime/placer.cc:1059] a: (Const)/job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2019-05-31 07:14:25.478413: I tensorflow/core/common_runtime/placer.cc:1059] b: (Const)/job:localhost/replica:0/task:0/device:GPU:0
[[22. 28.]
[49. 64.]]
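The scenario-2 check (calling the env's python binary directly, without activation) can also be scripted, which is handy when comparing several interpreters. A small sketch; the helper name is mine and the path is taken from the transcript above:

```python
import subprocess
import sys

def can_import_module(python_bin, module):
    """Run `python_bin -c "import <module>"` and report whether it succeeded."""
    result = subprocess.run(
        [python_bin, "-c", "import " + module],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return result.returncode == 0

# On the server, the interesting call would be:
# can_import_module("/home/ubuntu/anaconda3/envs/tensorflow_p36/bin/python",
#                   "tensorflow")
print(can_import_module(sys.executable, "sys"))  # the current interpreter can import sys
```

A False result here, combined with a successful import after activation, would confirm that the failure is caused by skipping the activation scripts rather than by the environment itself.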
Scenario 3: Running the same commands through PyCharm's remote console, configured with the same interpreter:
ssh://ubuntu@XX.XX.XX.XX:22/home/ubuntu/anaconda3/envs/tensorflow_p36/bin/python -u /home/ubuntu/.pycharm_helpers/pydev/pydevconsole.py --mode=server

Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) 
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
PyDev console: using IPython 6.4.0
Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) 
[GCC 7.2.0] on linux

import tensorflow as tf
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
2019-05-31 07:17:03.883169: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-31 07:17:03.883577: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55be28eef280 executing computations on platform CUDA. Devices:
2019-05-31 07:17:03.883609: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): Tesla K80, Compute Capability 3.7
2019-05-31 07:17:03.886035: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2300060000 Hz
2019-05-31 07:17:03.886752: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55be28f56d50 executing computations on platform Host. Devices:
2019-05-31 07:17:03.886777: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2019-05-31 07:17:03.886983: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
pciBusID: 0000:00:1e.0
totalMemory: 11.17GiB freeMemory: 508.38MiB
2019-05-31 07:17:03.887009: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-31 07:17:03.887658: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-31 07:17:03.887681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2019-05-31 07:17:03.887697: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2019-05-31 07:17:03.887881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 283 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7)
Device mapping:
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7
2019-05-31 07:17:03.889133: I tensorflow/core/common_runtime/direct_session.cc:317] Device mapping:
/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device
/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K80, pci bus id: 0000:00:1e.0, compute capability: 3.7
MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
2019-05-31 07:17:03.890673: I tensorflow/core/common_runtime/placer.cc:1059] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2019-05-31 07:17:03.890718: I tensorflow/core/common_runtime/placer.cc:1059] a: (Const)/job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2019-05-31 07:17:03.890750: I tensorflow/core/common_runtime/placer.cc:1059] b: (Const)/job:localhost/replica:0/task:0/device:GPU:0
[[22. 28.]
[49. 64.]]
In the meantime, a workaround is to run the script over plain ssh, activating the environment first:

ssh [host] "source ~/anaconda3/bin/activate [name of conda env] ; cd [pick a dir] ; [command]"
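That one-liner can also be assembled from Python, e.g. to launch remote jobs from a script. A sketch under the same assumptions as the command above (the helper name is mine):

```python
import shlex

def remote_conda_command(host, env_name, workdir, command,
                         activate="~/anaconda3/bin/activate"):
    """Build an ssh argv that activates a conda env, cds, then runs `command`.

    `activate` is deliberately left unquoted so the remote shell expands "~";
    the env name and working directory are shell-quoted.
    """
    remote = "source {} {} ; cd {} ; {}".format(
        activate, shlex.quote(env_name), shlex.quote(workdir), command)
    return ["ssh", host, remote]

argv = remote_conda_command("ubuntu@xx.xx.xx.xx", "tensorflow_p36",
                            "/home/ubuntu/company/DeepLearning_copy",
                            "python apps/test_gpu.py")
# Pass `argv` to subprocess.run(argv, check=True) to execute it.
```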