
Python: How do I optimize a TensorFlow CNN?


I'm very new to TensorFlow, so I apologize if my question comes across as ignorant.

I have a very simple TensorFlow CNN that takes an image as input and outputs another image. The batch size is only 5, it takes several minutes to run between epochs, and it often crashes after 5 epochs (I'm using Python 3.6.5 on a Mac with 16 GB of RAM).

Here is a snippet of my program:

import tensorflow as tf

learning_rate = 0.01
inputs_ = tf.placeholder(tf.float32, (None, 224, 224, 3), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 224, 224, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs=inputs_, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 224x224x32
maxpool1 = tf.layers.max_pooling2d(conv1, pool_size=(2,2), strides=(2,2), padding='same')
# Now 112x112x32

conv2 = tf.layers.conv2d(inputs=maxpool1, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 112x112x32
maxpool2 = tf.layers.max_pooling2d(conv2, pool_size=(2,2), strides=(2,2), padding='same')
# Now 56x56x32

conv3 = tf.layers.conv2d(inputs=maxpool2, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 56x56x32
maxpool3 = tf.layers.max_pooling2d(conv3, pool_size=(2,2), strides=(2,2), padding='same')
# Now 28x28x32

conv4 = tf.layers.conv2d(inputs=maxpool3, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool4 = tf.layers.max_pooling2d(conv4, pool_size=(2,2), strides=(2,2), padding='same')
# Now 14x14x32
conv5 = tf.layers.conv2d(inputs=maxpool4, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool5 = tf.layers.max_pooling2d(conv5, pool_size=(2,2), strides=(2,2), padding='same')
# Now 7x7x32
conv6 = tf.layers.conv2d(inputs=maxpool5, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv6, pool_size=(2,2), strides=(2,2), padding='same')
# Now 4x4x16

### Decoder
upsample1 = tf.image.resize_images(encoded, size=(7,7), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 7x7x16
conv7 = tf.layers.conv2d(inputs=upsample1, filters=16, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_images(conv7, size=(14,14), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 14x14x16
conv8 = tf.layers.conv2d(inputs=upsample2, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_images(conv8, size=(28,28), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 28x28x32
conv9 = tf.layers.conv2d(inputs=upsample3, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32

upsample4 = tf.image.resize_images(conv9, size=(56,56), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 56x56x32
conv10 = tf.layers.conv2d(inputs=upsample4, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 56x56x32

upsample5 = tf.image.resize_images(conv10, size=(112,112), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 112x112x32
conv11 = tf.layers.conv2d(inputs=upsample5, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 112x112x32

upsample6 = tf.image.resize_images(conv11, size=(224,224), method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# Now 224x224x32
conv12 = tf.layers.conv2d(inputs=upsample6, filters=32, kernel_size=(3,3), padding='same', activation=tf.nn.relu)
# Now 224x224x32

logits = tf.layers.conv2d(inputs=conv12, filters=1, kernel_size=(3,3), padding='same', activation=None)
#Now 224x224x1
# Pass logits through sigmoid and calculate the cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)

# Get cost and define the optimizer
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(learning_rate).minimize(cost)

imagelist = ... #array of all images with 3 channels
imagelabellist = ... #array of all images with 1 channel
epochs = 15

# sess is assumed to be an active tf.Session, with variables already
# initialized elsewhere in the program.
for e in range(epochs):
    imgs_large = imagelist
    imgs_target_large = imagelabellist
    shaped_imgs = tf.image.resize_images(imgs_large, [224, 224])
    shaped_imgs_target = tf.image.resize_images(imgs_target_large, [224, 224])
    # Get images from the batch
    imgs = sess.run(shaped_imgs)
    imgs_target = sess.run(shaped_imgs_target)
    batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs_target})
Here is the output from training the CNN:

epoch: #1
0 minutes between epoch
epoch: #2
3 minutes between epoch
epoch: #3
3 minutes between epoch
epoch: #4
12 minutes between epoch
epoch: #5


I'm open to any suggestions on how to fix this. Thanks.

tf.image.resize_images is a graph operation, so you are adding more nodes to the graph on every iteration (which explains the increasing run time). Add sess.graph.finalize() before your training loop; it will throw an error if nodes are being added, which lets you check for this.

If you move resize_images outside the loop, it should fix the problem.
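
As a minimal sketch (reusing the question's imagelist, imagelabellist, epochs, cost, opt, inputs_, targets_, and an existing sess), the restructured loop looks like this; the resize ops are built and run once before the loop, and the graph is finalized so any accidental node creation fails immediately:

# Build and run the resize ops once, outside the training loop,
# so no new graph nodes are created per epoch.
shaped_imgs = tf.image.resize_images(imagelist, [224, 224])
shaped_imgs_target = tf.image.resize_images(imagelabellist, [224, 224])
imgs = sess.run(shaped_imgs)
imgs_target = sess.run(shaped_imgs_target)

# Freeze the graph: any further op creation raises a RuntimeError.
sess.graph.finalize()

for e in range(epochs):
    batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs_target})

With the graph frozen, each epoch is a single sess.run call, so the time per epoch stays flat instead of growing.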

What does your training loop look like? How are you feeding the data?

Edited the code to include the training loop.

tf.image.resize_images is probably a graph operation, so you may be adding more nodes to the graph (which explains the increasing run time). Add sess.graph.finalize() before your training loop; it will throw an error if nodes are being added.

Thanks, you were right. I simply moved resize_images outside the loop, and that solved the problem.