How can I reduce the size of the VGG16 intermediate-layer bottleneck features in Python?


I am trying to connect the conv4_3 layer of the VGG16 network to the RPN of Faster R-CNN instead of conv5_3. Below is the Python code for the VGG16 head. I changed these lines:

def _image_to_head(self, is_training, reuse=False):
    with tf.variable_scope(self._scope, self._scope, reuse=reuse):
      net = slim.repeat(self._image, 2, slim.conv2d, 64, [3, 3],
                        trainable=False, scope='conv1')
      net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool1')
      net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3],
                        trainable=False, scope='conv2')
      net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool2')
      net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3],
                        trainable=is_training, scope='conv3')
      net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool3')
      net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3],
                        trainable=is_training, scope='conv4')
      net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool4')
      net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3],
                        trainable=is_training, scope='conv5')

    self._act_summaries.append(net)
    self._layers['head'] = net

    return net
to the version in the second code listing at the end of this post, where I removed the conv5 block and the pool4 layer. Since my objects are small, I hoped a higher-resolution head would give better results, but the results got worse. Do I need to add a deconvolution layer at the end of conv4, or is there another way?


Thanks
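For context (my own numbers, not from the post): each 2×2, stride-2 max-pool with SAME padding halves the spatial size, so cutting the head at conv4_3 (three pools, stride 8) instead of conv5_3 (four pools, stride 16) doubles the feature-map resolution in each dimension. A quick sketch, using an illustrative input side of 600 px:

```python
import math

def head_size(input_size, num_pools):
    """Spatial side of the feature map after `num_pools` 2x2, stride-2
    max-pool layers with SAME padding (each pool takes the ceiling of half)."""
    size = input_size
    for _ in range(num_pools):
        size = math.ceil(size / 2)
    return size

print(head_size(600, 4))  # conv5_3 head, stride 16 -> 38
print(head_size(600, 3))  # conv4_3 head, stride 8  -> 75
```

So the conv4_3 feature map is roughly twice as large per side, which is exactly why it is attractive for small objects.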

Also, are there ways to reduce the length of the bottleneck features?

Why you should not add a deconv layer:

  • You would be initializing the deconv layer with random values
  • You are not fine-tuning the network; you are only doing a forward pass through it
  • As a result, the deconv output would randomize your conv4 features
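To make the first point concrete, here is a minimal NumPy sketch (my own illustration, not from the answer) of a stride-2 transposed convolution with randomly initialized weights: it doubles the spatial resolution, but since the kernel is random and never trained, the output values carry no learned relationship to the conv4 features.

```python
import numpy as np

def conv2d_transpose_stride2(x, kernel):
    """Minimal stride-2 transposed convolution for a single-channel 2D map.
    Each input pixel scatters a weighted copy of the kernel into the output."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h * 2 + kh - 2, w * 2 + kw - 2))
    for i in range(h):
        for j in range(w):
            out[i * 2:i * 2 + kh, j * 2:j * 2 + kw] += x[i, j] * kernel
    return out

rng = np.random.default_rng(0)
feat = rng.standard_normal((7, 7))           # stand-in for a conv4 feature map
random_kernel = rng.standard_normal((2, 2))  # random init, as an untrained deconv would have
up = conv2d_transpose_stride2(feat, random_kernel)
print(up.shape)  # (14, 14): resolution doubles, but the values are untrained noise
```

In other words, upsampling through an untrained layer buys you resolution at the cost of feeding the RPN features it was never trained on.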
Pooling layers:

  • Average pooling: based on the window size, it returns the mean of that window. So for a (2, 2) window with the values [3, 2, 4, 3], the result is a single value: 3.

  • Max pooling: based on the window size, it returns the maximum of that window. So for the same (2, 2) window with the values [3, 2, 4, 3], the result is a single value: 4.
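The two operations on that (2, 2) window can be verified in a couple of lines (NumPy used here purely for illustration):

```python
import numpy as np

window = np.array([[3, 2],
                   [4, 3]])  # the (2, 2) window with values [3, 2, 4, 3]

print(window.mean())  # average pooling -> 3.0
print(window.max())   # max pooling    -> 4
```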


Have a look at pooling layers.

Are you using a pre-trained model? Do you want to extract the bottleneck features?
@PramodPatil Yes, I am trying to use the VGG network as a pre-trained model, but I want the output (the head layer) from conv4_3 instead of conv5_3, so that the feature map has a higher resolution. Do I need to do anything else here? Thanks
def _image_to_head(self, is_training, reuse=False):
    with tf.variable_scope(self._scope, self._scope, reuse=reuse):
      net = slim.repeat(self._image, 2, slim.conv2d, 64, [3, 3],
                        trainable=False, scope='conv1')
      net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool1')
      net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3],
                        trainable=False, scope='conv2')
      net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool2')
      net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3],
                        trainable=is_training, scope='conv3')
      net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool3')
      net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3],
                        trainable=is_training, scope='conv4')

    self._act_summaries.append(net)
    self._layers['head'] = net

    return net