
FID score for MNIST digits crashes Google Colab with 25 GB RAM (Keras)


I am trying to compute the FID score of a variational autoencoder (in Keras) to measure the quality of the generated MNIST digits. I have 10,000 samples of shape (28, 28, 1), which I need to reshape to (299, 299, 3) in order to feed them into Inception_v3 to compute the FID. This is the code that performs this operation:

from keras.applications.inception_v3 import preprocess_input
from keras.applications.inception_v3 import InceptionV3

sample_size = 4000

# Draw latent samples for the decoder and a matching batch of real test images
z_sample = np.random.normal(0, 1, size=(sample_size, latent_dim))
sample = np.random.randint(0, len(X_test), size=sample_size)
X_gen = decoder.predict(z_sample)
X_real = X_test[sample]

# Resize from (28, 28, 1) to (299, 299, 1), the spatial size Inception_v3 expects
X_gen = scale_images(X_gen, (299, 299, 1))
X_real = scale_images(X_real, (299, 299, 1))
print('Scaled', X_gen.shape, X_real.shape)

X_gen_t = preprocess_input(X_gen)
X_real_t = preprocess_input(X_real)

# Tile the single grayscale channel into 3 channels
X_gen = np.zeros(shape=(sample_size, 299, 299, 3))
X_real = np.zeros(shape=(sample_size, 299, 299, 3))
for i in range(3):
    X_gen[:, :, :, i] = X_gen_t[:, :, :, 0]
    X_real[:, :, :, i] = X_real_t[:, :, :, 0]
print('Final', X_gen.shape, X_real.shape)
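As a back-of-the-envelope check of where the memory goes (a sketch, using sample_size = 4000 from the code above): np.zeros allocates float64 by default, i.e. 8 bytes per element, so each of the two final arrays is roughly 8 GiB on its own, before counting X_gen_t and X_real_t:

```python
# Memory footprint of ONE of the np.zeros(shape=(sample_size, 299, 299, 3))
# allocations above. np.zeros defaults to float64 (8 bytes per element).
sample_size = 4000
bytes_per_array = sample_size * 299 * 299 * 3 * 8
print(round(bytes_per_array / 1024**3, 1))  # ~8.0 GiB, and two such arrays are allocated
```

Two 8 GiB target arrays plus the preprocessed grayscale arrays they are copied from already approach the 25 GB that a Colab session provides.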
But when I generate X_gen and X_real with

X_gen = np.zeros(shape=(sample_size, 299, 299, 3))
the Colab session crashes, because this operation apparently consumes 25 GB of RAM. Why does this happen? Is there a better way to compute the FID score for MNIST digits?
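One direction that should cut the footprint considerably (a sketch, not a verified fix for this exact setup): keep everything in float32 instead of NumPy's float64 default, and tile the grayscale channel with np.repeat instead of allocating a zeros array and copying channel by channel. The small stand-in array below only illustrates the shapes; in the real code the input would be the output of scale_images and preprocess_input:

```python
import numpy as np

# Small stand-in (8 samples) for the scaled, preprocessed grayscale images.
x = np.zeros((8, 299, 299, 1), dtype=np.float32)

# Tile the channel axis directly: no np.zeros target array, no Python loop.
x_rgb = np.repeat(x, 3, axis=-1)
print(x_rgb.shape)  # (8, 299, 299, 3)

# At sample_size = 4000, float32 needs ~4 GiB per array instead of the
# ~8 GiB that the float64 np.zeros default costs.
print(round(4000 * 299 * 299 * 3 * 4 / 1024**3, 1))  # ~4.0 GiB
```

Processing the 4000 samples through Inception_v3 in smaller batches (and accumulating only the activations needed for FID) would reduce peak memory further, since the full (sample_size, 299, 299, 3) arrays never need to exist at once.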