Python - My Google Colab session is crashing due to excessive memory usage

I am training a CNN on 2403 images of 1280x720 pixels each. This is the code I am running:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D,MaxPooling2D,Activation,Dense,Flatten,Dropout
model = keras.Sequential()

model.add(Conv2D(32, (3, 3), input_shape=(1280,720,3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(3))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(1280, 720),
    batch_size=640,
    class_mode='categorical')
history = model.fit(
    train_generator,
    steps_per_epoch= 2403//640,
    epochs= 15,
)

The session crashes before the first epoch even finishes. Is there anything I can do to reduce RAM usage? Are there other options I should consider?

It looks like your batch size is very large and is consuming all the RAM, so I would suggest first trying a smaller batch size such as 32 or 64. Your image size is also very large; you can reduce it while you experiment initially.

train_generator = train_datagen.flow_from_directory(
    '/gdrive/MyDrive/shot/training',
    target_size=(256, 256),  # -> Change the image size
    batch_size=32,  # -> Reduce batch size
    class_mode='categorical'
)
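
Note that if target_size is reduced in the generator, the model's input_shape has to match it, otherwise model.fit will fail with a shape mismatch. A minimal sketch of the adjusted first layer (reusing the imports from the question's code, assuming 256x256 inputs):

model = keras.Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(256, 256, 3)))  # must match the generator's target_size
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
# ... remaining layers unchanged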

You may also want to see how much further you can reduce the image size; (1280, 720) seems too large for Colab to handle.
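
For a rough sense of scale (assuming float32 inputs after rescale=1./255), the input tensor for a single batch at the original settings is about 640 * 1280 * 720 * 3 * 4 bytes, roughly 7 GB, before counting activations or the very large Flatten/Dense layer that a 1280x720 input produces; at 32 * 256 * 256 * 3 * 4 bytes it drops to about 25 MB. A quick sketch of that back-of-the-envelope estimate:

# Back-of-the-envelope estimate of the input memory for one batch
# (float32 = 4 bytes per value; activations and model weights add more on top)
def batch_input_bytes(batch_size, height, width, channels=3, bytes_per_value=4):
    return batch_size * height * width * channels * bytes_per_value

print(batch_input_bytes(640, 1280, 720) / 1e9)  # ~7.1 GB with the original settings
print(batch_input_bytes(32, 256, 256) / 1e6)    # ~25 MB with the reduced settings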