TensorFlow: how to control the convolution kernel we need in MXNet

For example, in TensorFlow we can do it as shown below. How can we do the same thing in MXNet to control the kernel, i.e. weights = weights * mask? Many thanks.

    if mask_type is not None:
      # all rows above the center row are visible
      mask[:center_h, :, :, :] = 1
      if mask_type == 'A':
        mask[center_h, :center_w, :, :] = 1

      if mask_type == 'B':
        mask[center_h, :center_w+1, :, :] = 1

    else:
      mask[:, :, :, :] = 1

    weights_shape = [kernel_h, kernel_w, in_channel, num_outputs]
    weights = tf.get_variable("weights", weights_shape,
      tf.float32, tf.truncated_normal_initializer(stddev=0.1))
    weights = weights * mask
    biases = tf.get_variable("biases", [num_outputs],
          tf.float32, tf.constant_initializer(0.0))

    outputs = tf.nn.conv2d(inputs, weights, [1, stride_h, stride_w, 1], padding="SAME")
    outputs = tf.nn.bias_add(outputs, biases)
Here is the full TensorFlow code:
import numpy as np
import tensorflow as tf

def conv2d(inputs, num_outputs, kernel_shape, strides=[1, 1], mask_type=None, scope="conv2d"):
  with tf.variable_scope(scope) as scope:
    kernel_h, kernel_w = kernel_shape
    stride_h, stride_w = strides
    batch_size, height, width, in_channel = inputs.get_shape().as_list()

    center_h = kernel_h // 2
    center_w = kernel_w // 2

    assert kernel_h % 2 == 1 and kernel_w % 2 == 1, "kernel height and width must be odd number"
    mask = np.zeros((kernel_h, kernel_w, in_channel, num_outputs), dtype=np.float32)
    if mask_type is not None:
      # all rows above the center row are visible
      mask[:center_h, :, :, :] = 1
      if mask_type == 'A':
        mask[center_h, :center_w, :, :] = 1

      if mask_type == 'B':
        mask[center_h, :center_w+1, :, :] = 1

    else:
      mask[:, :, :, :] = 1

    weights_shape = [kernel_h, kernel_w, in_channel, num_outputs]
    weights = tf.get_variable("weights", weights_shape,
      tf.float32, tf.truncated_normal_initializer(stddev=0.1))
    weights = weights * mask
    biases = tf.get_variable("biases", [num_outputs],
          tf.float32, tf.constant_initializer(0.0))

    outputs = tf.nn.conv2d(inputs, weights, [1, stride_h, stride_w, 1], padding="SAME")
    outputs = tf.nn.bias_add(outputs, biases)

    return outputs

In MXNet you can use your own variable as the weights: first create a weight variable, then pass it to the Convolution operator. For example:

weight = mx.sym.Variable('weights', init=mx.initializer.Xavier())
conv1 = mx.sym.Convolution(data=data, weight=weight, kernel=(5,5), num_filter=20)
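Note that MXNet's Convolution operator (with the default NCHW layout) expects its weights shaped as (num_filter, in_channel, kernel_h, kernel_w), while the TensorFlow code above builds them as (kernel_h, kernel_w, in_channel, num_outputs), so the mask has to be constructed (or transposed) to match MXNet's layout.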
If you want to apply a mask to the weights as in your code:

mask = ...  # create the mask you want
weight = weight * mask
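
Putting it together, below is a minimal sketch of what such a masked convolution could look like with MXNet's symbol API. The function name masked_conv2d, the variable names, and the idea of holding the mask in its own constant variable are my own choices for illustration, not an established MXNet API:

import numpy as np
import mxnet as mx

def masked_conv2d(data, in_channel, num_filter, kernel_shape, mask_type=None, name="conv2d"):
    kernel_h, kernel_w = kernel_shape
    assert kernel_h % 2 == 1 and kernel_w % 2 == 1, "kernel height and width must be odd"
    center_h, center_w = kernel_h // 2, kernel_w // 2

    # MXNet's Convolution weight is (num_filter, in_channel, kernel_h, kernel_w),
    # so the mask is built in that order rather than TensorFlow's (h, w, in, out)
    mask = np.zeros((num_filter, in_channel, kernel_h, kernel_w), dtype=np.float32)
    if mask_type is not None:
        mask[:, :, :center_h, :] = 1                   # rows above the center are visible
        if mask_type == 'A':
            mask[:, :, center_h, :center_w] = 1        # center row, strictly left of center
        if mask_type == 'B':
            mask[:, :, center_h, :center_w + 1] = 1    # center row, including the center pixel
    else:
        mask[:, :, :, :] = 1

    weight = mx.sym.Variable(name + '_weight', init=mx.initializer.Xavier())
    bias = mx.sym.Variable(name + '_bias', init=mx.initializer.Zero())
    # keep the mask in its own variable; at bind time feed it the numpy array
    # above so it acts as a constant
    mask_sym = mx.sym.Variable(name + '_mask', shape=mask.shape)
    masked_weight = weight * mask_sym

    out = mx.sym.Convolution(data=data, weight=masked_weight, bias=bias,
                             kernel=(kernel_h, kernel_w), num_filter=num_filter,
                             pad=(center_h, center_w))
    return out, mask

At bind time, feed the name + '_mask' input with the returned numpy mask and set its grad_req to 'null' (keeping 'write' for the real parameters) so the mask is never updated; in Gluon the same effect could be achieved by storing the mask as a constant parameter and doing the multiplication inside hybrid_forward.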