Neural network: freeing GPU memory during cross-validation

I am trying to run cross-validation on an image classification network, using Keras with the Theano backend and scikit-learn's KFold to split the data. However, after training runs fine for 3 folds, I get an out-of-memory error on the GPU.


I am not doing anything to free GPU memory at the end of each fold. Can anyone tell me whether there is a way to clear the GPU memory before starting a new fold?

I ran into the same problem recently. This is not a perfect solution, because it does not actually clear the memory.

However, my suggestion is to create and compile the model only once and save its initial weights. Then, at the start of each fold, reload those weights.

Something like the code below:

from sklearn.model_selection import KFold
import numpy as np
from keras.applications import VGG16

# We create our model only once
def create_model():
    model_vgg16_conv = VGG16(weights='imagenet', include_top=True)

    # The loss here is arbitrary; this is only a dummy setup for the example
    model_vgg16_conv.compile(optimizer="adam", loss="mean_squared_error")
    return model_vgg16_conv, model_vgg16_conv.get_weights()

# We re-initialize it at the start of every fold
def init_weight(same_old_model, first_weights):
    ## Uncomment the line below to reshuffle the weight values so they are not exactly the same between folds
    # first_weights = [np.random.permutation(x.flat).reshape(x.shape) for x in first_weights]

    same_old_model.set_weights(first_weights)


model_vgg16_conv, weights = create_model()


# We create random dummy data compliant with the VGG16 input shape and the 1000 ImageNet labels
data = np.random.randint(0, 255, size=(100, 224, 224, 3))
labels = np.random.randint(0, 2, size=(100, 1000))

cvscores = []
kfold = KFold(n_splits=10, shuffle=True)
for train, test in kfold.split(data, labels):
    print("Initializing Weights...")
    ## instead of creating a new model, we just reset its weights
    init_weight(model_vgg16_conv, weights)

    # fit as usual, but using the split that came from KFold
    model_vgg16_conv.fit(data[train], labels[train], epochs=2)

    # Evaluation: with no extra metrics compiled, evaluate() returns only the loss
    scores = model_vgg16_conv.evaluate(data[test], labels[test])
    print("%s: %.4f" % (model_vgg16_conv.metrics_names[0], scores))
    cvscores.append(scores)

print("%.2f (+/- %.2f)" % (np.mean(cvscores), np.std(cvscores)))