
Amazon Web Services: How to make full use of resources


I am training a CNN with TensorFlow to classify the CIFAR-10 dataset. I am running a Jupyter notebook on an AWS p2.xlarge instance (1 GPU, 4 vCPUs, 61 GB RAM). I set it up following this.

Training takes a very long time. When I check the system resources, I see that most of them are still sitting idle:

$ free -h
             total       used       free     shared    buffers     cached
Mem:           59G       3.5G        56G        15M        55M       854M
-/+ buffers/cache:       2.6G        57G
Swap:           0B         0B         0B


$ top
top - 18:10:47 up  1:53,  1 user,  load average: 0.47, 0.63, 0.69
Tasks: 134 total,   1 running, 133 sleeping,   0 stopped,   0 zombie
%Cpu(s): 19.1 us,  4.6 sy,  0.0 ni, 73.2 id,  0.0 wa,  0.0 hi,  0.3 si,  2.8 st
KiB Mem:  62881764 total,  3695184 used, 59186580 free,    56792 buffers
KiB Swap:        0 total,        0 used,        0 free.   875028 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                       
 2282 ubuntu    20   0  0.099t 2.192g 202828 S 248.2  3.7 141:55.88 python3                                                                                       


$ nvidia-smi 
Sat May  6 18:12:28 2017       
+------------------------------------------------------+                       
| NVIDIA-SMI 352.99     Driver Version: 352.99         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 0000:00:1E.0     Off |                    0 |
| N/A   54C    P0    67W / 149W |  11012MiB / 11519MiB |     54%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      2282    C   /usr/bin/python3                             10954MiB |
+-----------------------------------------------------------------------------+

How can I detect where the bottleneck is? Also, are there any suggestions for making use of all the system resources?
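One way to see where a training step actually spends its time is TensorFlow's step tracing. Below is a minimal sketch for the TF 1.x session API; the tiny stand-in graph is only there to make it self-contained, and in practice you would trace your own train op. It writes a Chrome-trace JSON that can be opened at chrome://tracing to see whether GPU kernels or CPU-side input ops dominate the step.

import tensorflow as tf
from tensorflow.python.client import timeline

# Stand-in graph; in the CIFAR-10 notebook this would be the real train op.
x = tf.random_normal([128, 1024])
w = tf.Variable(tf.random_normal([1024, 10]))
loss = tf.reduce_mean(tf.matmul(x, w))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Trace a single step; step_stats records per-op timings on CPU and GPU.
    sess.run(train_op, options=run_options, run_metadata=run_metadata)

# Write a Chrome-trace file viewable at chrome://tracing.
tl = timeline.Timeline(run_metadata.step_stats)
with open('timeline.json', 'w') as f:
    f.write(tl.generate_chrome_trace_format())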

There are many tricks and improvements you can apply to get high performance, for example making sure you use a high-performance input pipeline and taking advantage of software pipelining. Unfortunately, without more information about your specific setup I can't diagnose it further.
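As a sketch of what a pipelined input might look like (assuming TensorFlow ≥ 1.4 and a TFRecord copy of CIFAR-10; the file name and record layout below are placeholders): decoding runs in parallel on the CPU, and prefetch(1) lets the next batch be prepared while the GPU is still working on the current one, instead of pushing batches through feed_dict.

import tensorflow as tf

# Hypothetical TFRecord file; substitute your own CIFAR-10 records.
filenames = ['cifar10_train.tfrecords']

def parse_example(serialized):
    # Assumed record layout: raw image bytes plus an int64 label.
    features = tf.parse_single_example(serialized, {
        'image': tf.FixedLenFeature([], tf.string),
        'label': tf.FixedLenFeature([], tf.int64),
    })
    image = tf.decode_raw(features['image'], tf.uint8)
    image = tf.cast(tf.reshape(image, [32, 32, 3]), tf.float32) / 255.0
    return image, features['label']

dataset = (tf.data.TFRecordDataset(filenames)
           .map(parse_example, num_parallel_calls=4)  # keep the 4 vCPUs busy decoding
           .shuffle(buffer_size=10000)
           .repeat()
           .batch(128)
           .prefetch(1))   # prepare the next batch while the GPU trains on this one

images, labels = dataset.make_one_shot_iterator().get_next()
# Feed `images` and `labels` directly into the model graph instead of using feed_dict.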

For background reading (tips and tricks for improving performance), see:

I would suggest starting with the open-source TensorFlow benchmark scripts, available at: