Python TFLearn autoencoder allocates all memory

Tags: python, tensorflow, tflearn

I am trying to build a simple autoencoder with TFLearn. The training images have a resolution of 150×150 pixels (with 3 channels), and I converted them into an HDF5 file using TFLearn. The problem is that the network immediately allocates all 16 GB of available memory.
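
For context, such a file is typically produced with TFLearn's build_hdf5_image_dataset helper; the snippet below is only a sketch of how that step might look (the image folder path and mode='folder' are assumptions, not taken from the post):

from tflearn.data_utils import build_hdf5_image_dataset

# Hypothetical dataset-building step: one sub-folder per class under 'path/to/images'
build_hdf5_image_dataset('path/to/images', image_shape=(150, 150),
                         mode='folder', output_path='dataset150-150.h5',
                         categorical_labels=True, normalize=True)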

Here is my code:

import os
import h5py
import tensorflow as tf
import tflearn

h5_file = h5py.File(os.path.join(data_folder, 'dataset150-150.h5'), 'r')
X = h5_file['X']
Y = h5_file['X']  # the autoencoder's target is the input itself

batch_size = 8

# Building the encoder
encoder = tflearn.input_data(shape=[batch_size, 150, 150, 3], name='input')
# Flatten the input layer
encoder = tflearn.reshape(encoder, new_shape=[batch_size, 67500])
encoder = tflearn.fully_connected(encoder, 67500)
encoder = tflearn.fully_connected(encoder, 512)
hidden = tflearn.fully_connected(encoder, 16)
decoder = tflearn.fully_connected(hidden, 512)
decoder = tflearn.fully_connected(decoder, 67500)
# Reshape the decoder output back to the image shape
decoder = tf.reshape(decoder, [batch_size, 150, 150, 3])

# Regression, with mean square error
net = tflearn.regression(decoder, optimizer='adam', learning_rate=0.001, loss='mean_square', metric=None)

# Training the auto encoder
model = tflearn.DNN(net, tensorboard_verbose=3, tensorboard_dir="./AutoEncoder")
model.fit(X, Y, batch_size=batch_size)
Maybe someone can spot my mistake? Thanks in advance.