
Tensorflow grouped depthwise convolution performance


I'm trying to improve the performance of a ResNeXt implementation in Tensorflow. David Berthelot mentioned a potential performance improvement, and I'd like to apply it to my implementation below; how does the reshape + sum fit in here?

# one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim, is_training):
  x = tf.layers.conv2d(x, filters=64, kernel_size=1, strides=strides)
  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)
  w = tf.get_variable(name='depthwise_filter', shape=[3, 3, 64, cardinality])
  x = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 1, 1, 1], padding='SAME')
  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)
  x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
  x = tf.layers.batch_normalization(x, training=is_training)
  return tf.nn.relu(x)

Edit: I believe this implementation is correct and I only need to add a couple of operations to improve performance. Rereading David's comment, the suggestion is depthwise + reshape + sum, a different approach rather than a single depthwise operation on its own; the code above does not compute the equivalent of version 3d of the bottleneck block.

Depthwise convolutions and grouped convolutions are very similar. A grouped convolution applies a set of independent kernels over groups of channels, while a depthwise convolution applies a set of independent kernels to each individual input channel. Crucially, in both cases each connection between an input channel and an output channel uses weights that are not shared with any other input-output channel pair. As a result, we can (as the man said!) apply a reshape and a sum to emulate a grouped convolution with a depthwise convolution. The approach comes at the cost of memory, since we must allocate a tensor several times larger to perform the intermediate computation.

A depthwise convolution maps each single input channel to multiple output channels, while a grouped convolution maps blocks of input channels to blocks of output channels. If we want to apply a grouped convolution with 32 groups to a 128-channel input, we can instead apply a depthwise convolution with a channel multiplier of 128 / 32 = 4. The output tensor then represents a disassembled version of the equivalent grouped convolution's output: the first 16 channels of the depthwise output correspond to the first 4 channels of the grouped convolution's output. We can reshape those channels into a set of 4x4 blocks and sum along one of the new axes to compute the equivalent of the grouped convolution's output. Across all output channels, we simply reshape by adding two new axes of size 4, sum, and then reshape back to 128 channels.

# one resnext block per figure 3c
# see also https://arxiv.org/pdf/1611.05431.pdf
def bottleneck(x, strides, dim, is_training):
  input_channels = x.shape.as_list()[-1]
  bottleneck_depth = input_channels // 2
  x = tf.layers.conv2d(x, filters=bottleneck_depth, kernel_size=1, strides=strides)
  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)

  group_size = bottleneck_depth // cardinality
  w = tf.get_variable(name='depthwise_filter', shape=[3, 3, bottleneck_depth, group_size])
  x = tf.nn.depthwise_conv2d_native(x, w, strides=[1, 1, 1, 1], padding='SAME')
  depthwise_shape = x.shape.as_list()
  x = tf.reshape(x, depthwise_shape[:3] + [cardinality, group_size, group_size])
  x = tf.reduce_sum(x, axis=4)
  x = tf.reshape(x, depthwise_shape[:3] + [bottleneck_depth])

  x = tf.layers.batch_normalization(x, training=is_training)
  x = tf.nn.relu(x)
  x = tf.layers.conv2d(x, filters=dim, kernel_size=1, strides=1)
  x = tf.layers.batch_normalization(x, training=is_training)
  return tf.nn.relu(x)

Edit: it seems I had not formulated the reshape/sum correctly. I've updated the code sample above to reflect what I now believe to be the correct transformation. The old version reduced to a depthwise convolution with a channel multiplier of 1.
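
As a sanity check (my own addition, not from the original post), the reshape + sum formulation can be compared against an explicit grouped convolution built from tf.split, tf.nn.conv2d, and tf.concat. A minimal sketch with made-up sizes:

import numpy as np
import tensorflow as tf

batch, height, width = 2, 8, 8
channels, cardinality = 16, 4
group_size = channels // cardinality

x = tf.constant(np.random.rand(batch, height, width, channels), tf.float32)
w = tf.random_normal([3, 3, channels, group_size])

# depthwise + reshape + sum formulation
y = tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
y = tf.reshape(y, [batch, height, width, cardinality, group_size, group_size])
y = tf.reduce_sum(y, axis=4)
y = tf.reshape(y, [batch, height, width, channels])

# reference: explicit grouped convolution, one conv2d per group
x_groups = tf.split(x, cardinality, axis=3)
w_groups = tf.split(w, cardinality, axis=2)  # each [3, 3, group_size, group_size]
ref = tf.concat([tf.nn.conv2d(xg, wg, strides=[1, 1, 1, 1], padding='SAME')
                 for xg, wg in zip(x_groups, w_groups)], axis=3)

with tf.Session() as sess:
    out, expected = sess.run([y, ref])
    print(np.allclose(out, expected, atol=1e-4))  # should print True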

I'll use numpy with all weights fixed at 1 to illustrate the incorrect and the correct behavior, to make the difference easier to see. We'll look at a simple 8-channel input with two groups.

import numpy as np

input = np.arange(8)
# => [0, 1, 2, 3, 4, 5, 6, 7]
# the result of applying a depthwise convolution with a channel multiplier of 4
# and all weights fixed at 1
depthwise_output = np.repeat(input, 4)
# => [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, ..., 6, 6, 7, 7, 7, 7]
Incorrect transformation:

x = depthwise_output.reshape((8, 4))
# => [[0, 0, 0, 0],
#     [1, 1, 1, 1],
#     [2, 2, 2, 2],
#     [3, 3, 3, 3],
#     [4, 4, 4, 4],
#     [5, 5, 5, 5],
#     [6, 6, 6, 6],
#     [7, 7, 7, 7]]
x = x.sum(axis=1)
# => [ 0,  4,  8, 12, 16, 20, 24, 28]
Correct transformation:

x = depthwise_output.reshape((2, 4, 4))
# => [[[0, 0, 0, 0],
#      [1, 1, 1, 1],
#      [2, 2, 2, 2],
#      [3, 3, 3, 3]],
# 
#     [[4, 4, 4, 4],
#      [5, 5, 5, 5],
#      [6, 6, 6, 6],
#      [7, 7, 7, 7]]]
x = x.sum(axis=1)
# => [[ 6,  6,  6,  6],
#     [22, 22, 22, 22]]
x = x.reshape((8,))
# => [ 6,  6,  6,  6, 22, 22, 22, 22]
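
For reuse, the correct transformation can be wrapped in a small helper. This is my own sketch (the name grouped_sum_from_depthwise is hypothetical), not code from the original post:

import numpy as np

def grouped_sum_from_depthwise(depthwise_output, cardinality, group_size):
    # channels are laid out as [group, input-within-group, multiplier]
    x = depthwise_output.reshape((cardinality, group_size, group_size))
    x = x.sum(axis=1)  # sum over the input channels within each group
    return x.reshape((cardinality * group_size,))

depthwise_output = np.repeat(np.arange(8), 4)
print(grouped_sum_from_depthwise(depthwise_output, cardinality=2, group_size=4))
# => [ 6  6  6  6 22 22 22 22]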

Here is how I implemented it:

class LayerCardinalConv(object):
    """Aggregated Residual Transformations for Deep Neural Networks https://arxiv.org/abs/1611.05431"""

    def __init__(self, name, w, nin, card, use_bias=True, init='he'):
        self.group = nin // card
        with tf.name_scope(name):
            # weight_init is the author's own initializer helper
            self.conv = tf.Variable(weight_init(nin, self.group, [*w, nin, self.group], init), name='conv')
            self.bias = tf.Variable(tf.zeros([nin]), name='bias') if use_bias else 0

    def __call__(self, vin, train):
        s = tf.shape(vin)
        # depthwise convolution with a channel multiplier equal to the group size
        vout = tf.nn.depthwise_conv2d(vin, self.conv, strides=[1] * 4, padding='SAME')
        # fold the multiplied channels back together and sum within each group
        vout = tf.reshape(vout, [s[0], s[1], s[2], self.group, s[3]])
        vout = tf.reduce_sum(vout, 3)
        return vout + self.bias
Notes:

  • w is the kernel shape, e.g. (3, 3)
  • nin is the number of input channels
  • card is the cardinality, i.e. the number of groups
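
A hypothetical usage sketch (my addition, not from the original answer), assuming the author's weight_init helper is available in scope:

x = tf.placeholder(tf.float32, [None, 32, 32, 64])                     # NHWC input
layer = LayerCardinalConv('cardinal_conv', w=(3, 3), nin=64, card=32)
y = layer(x, train=True)                                               # -> [None, 32, 32, 64]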

Hope that helps.

I realize this isn't the best fit for a stackoverflow question, but tensorflow discourages asking usage questions on their github issue tracker. I believe the reasoning above is correct, but please correct me if I'm wrong.