Consequences of TensorFlow Keras running out of memory

If this question is off-topic here, feel free to point me to another StackExchange site. :-)

I am using Keras on a GPU with very limited memory (GeForce GTX 970, ~4 GB). As a result, I run out of memory (OOM) when the batch size is set above a certain level. Lowering the batch size avoids the problem, but Keras then prints the following warnings:

2019-01-02 09:47:03.173259: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.57GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.211139: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.68GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.268074: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.95GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.685032: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.39GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.732304: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.56GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.850711: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.39GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.879135: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.48GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.963522: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.42GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:03.984897: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.47GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2019-01-02 09:47:04.058733: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.08GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
What do these warnings mean for me as a user? What are these performance gains? Do they mean only that training would compute faster, or could I even get better results in terms of validation loss?


In my setup, I use Keras with the TensorFlow backend, tensorflow-gpu==1.8.0.
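
For context, a minimal sketch of the pattern described above (the model, data, and batch size here are placeholders, not taken from the original question):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Placeholder data and model standing in for the real setup.
x = np.random.rand(10000, 512).astype("float32")
y = np.random.randint(0, 10, size=(10000,))

model = Sequential([
    Dense(1024, activation="relu", input_shape=(512,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# A batch size that is too large for the ~4 GB of a GTX 970 raises an OOM
# error; a smaller one trains fine but may still print the warnings above.
model.fit(x, y, batch_size=64, epochs=1)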

It means that training loses some efficiency in terms of speed, because the GPU cannot be used for certain operations. The resulting loss values should not be affected, though.


To avoid the problem, the best practice is to reduce the batch size so that the available GPU memory is used effectively.
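
Beyond lowering the batch size, a common companion setting in TF 1.x (not mentioned in the answer above, so treat it as an assumption) is to let TensorFlow allocate GPU memory on demand instead of reserving nearly all of it up front:

import tensorflow as tf
from keras import backend as K

# Grow the GPU memory allocation as needed rather than grabbing almost
# all of it at session creation; this can ease allocator pressure.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))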

Are you using tensorflow or tensorflow-gpu? Please edit the question to include that information.

So the batch size should be reduced to a size that no longer triggers this message?

I personally only get this message at the very beginning of training; after that it never appears again.
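
A quick, standard way to settle the tensorflow vs. tensorflow-gpu question (this check is not from the original thread):

import tensorflow as tf
from tensorflow.python.client import device_lib

# With a working tensorflow-gpu install, the device list should contain
# a /device:GPU:0 entry in addition to the CPU.
print(tf.__version__)
print(device_lib.list_local_devices())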