Python "Resource exhausted" memory error when trying to train a Keras model

Tags: python, tensorflow, computer-vision, deep-learning, keras

I'm trying to train a VGG19 model for a binary image classification problem. My dataset doesn't fit into memory, so I train in batches using the model's .fit_generator function.

However, even when training in batches, I get the following error:

W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 392.00MiB.  See logs for memory state.

W tensorflow/core/framework/op_kernel.cc:975] Resource exhausted: OOM when allocating tensor with shape

Here's the console output about my GPU when the training script starts:

Using TensorFlow backend.
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.so locally
Found 20000 images belonging to 2 classes.
Found 5000 images belonging to 2 classes.
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
name: GeForce GT 750M
major: 3 minor: 0 memoryClockRate (GHz) 1.085
pciBusID 0000:01:00.0
Total memory: 1.95GiB
Free memory: 1.74GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0)
I may be wrong, but I think 1.5+ GB should be enough for training with small batch sizes, right?
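(As a sanity check on that number: the failed 392 MB allocation in the log matches the weight matrix of VGG19's first fully connected layer alone. The last pooling stage leaves a 7x7x512 feature map, so after Flatten there are 25088 inputs going into Dense(4096), and just that one float32 matrix is already 392 MiB, before counting activations, gradients and optimizer state:)

```python
# Size of the first fully connected layer's weight matrix in this VGG19:
# the last max-pooling stage leaves a 7x7x512 feature map.
flat_inputs = 7 * 7 * 512     # 25088 inputs after Flatten
units = 4096                  # first Dense layer
params = flat_inputs * units  # number of float32 weights
bytes_fp32 = params * 4       # 4 bytes per float32

print(flat_inputs)            # 25088
print(bytes_fp32 / 2**20)     # 392.0 (MiB), matches the failed allocation
```

And the Adam optimizer keeps extra moment tensors per weight, so the training graph needs roughly three times that for this layer alone.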

The full output of the script is rather large, so I'll only paste a portion of it.

Here's the code of my model:

from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau

class VGG19(object):
    def __init__(self, weights_path=None, train_folder='data/train', validation_folder='data/val'):
        self.weights_path = weights_path
        self.model = self._init_model()

        if weights_path:
            self.model.load_weights(weights_path)
        else:
            self.datagen = self._datagen()
            self.train_folder = train_folder
            self.validation_folder = validation_folder
            self.model.compile(
                loss='binary_crossentropy',
                optimizer='adam',
                metrics=['accuracy']
            )

    def fit(self, batch_size=32, nb_epoch=10):

        train_generator = self.datagen.flow_from_directory(
                self.train_folder, target_size=(224, 224),
                color_mode='rgb', class_mode='binary',
                batch_size=2
        )

        validation_generator = self.datagen.flow_from_directory(
            self.validation_folder, target_size=(224, 224),
            color_mode='rgb', class_mode='binary',
            batch_size=2
        )

        self.model.fit_generator(
            train_generator,
            samples_per_epoch=16,
            nb_epoch=1,
            verbose=1,
            validation_data=validation_generator,
            callbacks=[
                TensorBoard(log_dir='./logs', write_images=True),
                ModelCheckpoint(filepath='weights.{epoch:02d}-{val_loss:.2f}.hdf5', monitor='val_loss'),
                ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, min_lr=0.001)
            ],
            nb_val_samples=8
        )
    def evaluate(self, X, y, batch_size=32):
        return self.model.evaluate(
            X, y,
            batch_size=batch_size,
            verbose=1
        )

    def predict(self, X, batch_size=4, verbose=1):
        return self.model.predict(X, batch_size=batch_size, verbose=verbose)

    def predict_proba(self, X, batch_size=4, verbose=1):
        return self.model.predict_proba(X, batch_size=batch_size, verbose=verbose)

    def _init_model(self):
        model = Sequential()
        model.add(ZeroPadding2D((1, 1), input_shape=(224, 224, 3)))
        model.add(Convolution2D(64, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1,1)))
        model.add(Convolution2D(64, 3, 3, activation='relu'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(128, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1,1)))
        model.add(Convolution2D(128, 3, 3, activation='relu'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(256, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(256, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(256, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(256, 3, 3, activation='relu'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(ZeroPadding2D((1, 1)))
        model.add(Convolution2D(512, 3, 3, activation='relu'))
        model.add(MaxPooling2D((2, 2), strides=(2, 2)))

        model.add(Flatten())
        model.add(Dense(4096, activation='relu'))
        model.add(Dropout(0.5))
        model.add(Dense(4096, activation='relu'))
        model.add(Dropout(0.5))
        model.add(Dense(1, activation='softmax'))

        return model

    def _datagen(self):
        return ImageDataGenerator(
            featurewise_center=True,
            samplewise_center=False,
            featurewise_std_normalization=True,
            samplewise_std_normalization=False,
            zca_whitening=False,
            rotation_range=20,
            width_shift_range=0.2,
            height_shift_range=0.2,
            horizontal_flip=True,
            vertical_flip=True
        )
I run the model in the following way:

vgg19 = VGG19(train_folder='data/train/train', validation_folder='data/val/val')
vgg19.fit(nb_epoch=1)
My data/train/train and data/val/val folders each consist of two directories, cats and dogs, so that the ImageDataGenerator.flow_from_directory() function can separate my classes correctly.


What am I doing wrong? Is VGG19 simply too big for my machine, or is there a problem with the batch size?

How can I train the model on my machine?


PS: if I don't interrupt the training script (even though it outputs many errors similar to those in the paste above), the last lines of the output are as follows:

W tensorflow/core/common_runtime/bfc_allocator.cc:274] *****************************************************************************************xxxxxxxxxxx
W tensorflow/core/common_runtime/bfc_allocator.cc:275] Ran out of memory trying to allocate 392.00MiB.  See logs for memory state.
W tensorflow/core/framework/op_kernel.cc:975] Resource exhausted: OOM when allocating tensor with shape[25088,4096]
Traceback (most recent call last):
  File "train.py", line 6, in <module>
    vgg19.fit(nb_epoch=1)
  File "/home/denis/WEB/DeepLearning/CatsVsDogs/model/vgg19.py", line 84, in fit
    nb_val_samples=8
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 907, in fit_generator
    pickle_safe=pickle_safe)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1378, in fit_generator
    callbacks._set_model(callback_model)
  File "/usr/local/lib/python2.7/dist-packages/keras/callbacks.py", line 32, in _set_model
    callback._set_model(model)
  File "/usr/local/lib/python2.7/dist-packages/keras/callbacks.py", line 493, in _set_model
    self.sess = KTF.get_session()
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 111, in get_session
    _initialize_variables()
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 200, in _initialize_variables
    sess.run(tf.variables_initializer(uninitialized_variables))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 766, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 964, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1014, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1034, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[4096]
     [[Node: Variable_43/Assign = Assign[T=DT_FLOAT, _class=["loc:@Variable_43"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](Variable_43, Const_59)]]

Caused by op u'Variable_43/Assign', defined at:
  File "train.py", line 6, in <module>
    vgg19.fit(nb_epoch=1)
  File "/home/denis/WEB/DeepLearning/CatsVsDogs/model/vgg19.py", line 84, in fit
    nb_val_samples=8
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 907, in fit_generator
    pickle_safe=pickle_safe)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1351, in fit_generator
    self._make_train_function()
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 696, in _make_train_function
    self.total_loss)
  File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 387, in get_updates
    ms = [K.zeros(shape) for shape in shapes]
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 278, in zeros
    dtype, name)
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 182, in variable
    v = tf.Variable(value, dtype=_convert_string_dtype(dtype), name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 224, in __init__
    expected_shape=expected_shape)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 360, in _init_from_args
    validate_shape=validate_shape).op
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 47, in assign
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2240, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1128, in __init__
    self._traceback = _extract_stack()

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[4096]
     [[Node: Variable_43/Assign = Assign[T=DT_FLOAT, _class=["loc:@Variable_43"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/gpu:0"](Variable_43, Const_59)]]

The error has changed a bit as well. It's still an OOM error, though.

In this case you get the OOM error because your graph is too big. What was the shape of the tensor it was trying to allocate when everything went down?

Anyway, the first thing you can try is to allocate the model without any data in memory. Is anything else still running (another Jupyter notebook, some other model serving in the background)?
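For instance (a sketch, assuming the TensorFlow backend and the TF 1.x API visible in your traceback), you can keep TensorFlow from grabbing all GPU memory up front, which makes it easier to see how much the model alone actually needs:

```python
# Sketch: let TensorFlow allocate GPU memory on demand instead of all at once.
# Run this before building the Keras model (TF 1.x / old Keras backend API).
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))
```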

Also, you can save some space in the last layers:

model.add(Dense(4096, activation='relu'))
model.add(Dense(4096, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

That 4096x4096 matrix is quite big (and going straight back to 1 is a bad idea ;)

I'd be really surprised if you told me this model works.

A softmax activation on a single output (the last layer) makes no sense. Softmax normalizes the outputs of a layer so that they sum to 1... with only one output, it will always be 1! So if you want a binary probability, either use a sigmoid on 1 output or a softmax on 2 outputs.
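A tiny standalone sketch (plain Python, no Keras needed) showing why a 1-output softmax is constant:

```python
import math

def softmax(xs):
    """Normalize a list of scores so they are positive and sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# With a single output, softmax is always exactly 1, whatever the logit:
print(softmax([-3.7]))       # [1.0]
print(softmax([42.0]))       # [1.0]

# With two outputs it becomes a proper binary distribution:
print(softmax([1.0, 2.0]))   # [0.268..., 0.731...]
```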

Following your suggestion, I changed the first Dense layer to 1024 and the second to 256, and it started producing slightly different output. It actually starts running the training now, but I still get an OOM error, just in a different place. Could you help me find any other mistakes in my code?

Unfortunately I don't know the VGG16 model off the top of my head, and I don't know what kind of application you're training it for, but: cutting the number of layers in half might be a good idea... make the model less deep, use bigger strides, smaller inputs, more max-pooling layers, anything that uses less memory. If it then doesn't learn well, grow your network step by step.

OK, thank you very much for your help! I'll accept your post as the answer, since you really did help me solve my problem and pointed me in the right direction.

The VGG models are designed for many different classes. If you only need to distinguish a couple of classes, you can probably shrink the model a lot.

Yes, that's a good point. I've already tried a softmax on 2 outputs with a categorical crossentropy loss, and a sigmoid on 1 output with a binary loss. Either way, I ended up using a much smaller model, because my laptop can't handle a big one. With my tiny model I only get about 80% accuracy on the validation set.

Are you using the tf backend? If so, tf is very greedy with GPU memory... so definitely consider reducing your network size: it's currently huge, and would probably take weeks to train on such a small GPU.

Yes, you're right, I'm using the tensorflow backend. I'm still quite new to deep learning, so I'm just experimenting and testing what my machine is capable of. Thanks to the helpful answers I got here, I'll learn faster. Thanks again for your advice :)