How to use a Deconvolution layer / unpooling in caffe for ND blobs?
I am trying to use the Deconvolution layer in caffe for ND unpooling. However, the bilinear weight filler is not supported for ND blobs. For 3D deconvolution I would use:
layer {
  name: "name"
  type: "Deconvolution"
  bottom: "bot"
  top: "top"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: #output
    bias_term: false
    pad: 0
    kernel_size: #kernel
    group: #output
    stride: #stride
    weight_filler {
      type: "bilinear"
    }
  }
}
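For context, the deconvolution parameters that make such a layer perform bilinear (or trilinear) interpolation for an integer upsampling factor are usually chosen as in the FCN example code. A minimal sketch of that relationship (the helper name is my own, not from the post):

```python
import math

def deconv_upsample_params(factor):
    """FCN-style parameter choice so that a Deconvolution layer with a
    bilinear weight filler upsamples by `factor`. A sketch, assuming
    an integer factor; not taken from the original post."""
    kernel_size = 2 * factor - factor % 2   # even factor -> even kernel
    stride = factor
    pad = int(math.ceil((factor - 1) / 2.0))
    return kernel_size, stride, pad
```

For example, upsampling by 2 gives kernel_size 4, stride 2, pad 1.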
How do I fill the weights for ND unpooling, e.g. 4D unpooling (channels x depth x height x width)? Can I just use the weight filler, or will that produce bad results?
Edit
Here, a Python routine is used to fill the weights for 2D bilinear upsampling: (link)
I converted it to 3D as follows:
import numpy as np

def upsample_filt(size):
    """
    Make a 3D (trilinear) kernel suitable for upsampling of the given
    (d, h, w) size.
    """
    factor = (size + 1) // 2
    if size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:size, :size, :size]
    return (1 - abs(og[0] - center) / factor) * \
           (1 - abs(og[1] - center) / factor) * \
           (1 - abs(og[2] - center) / factor)

def interp(net, layers):
    """
    Set the weights of each layer in `layers` to trilinear kernels
    for interpolation.
    """
    for l in layers:
        m, k, d, h, w = net.params[l][0].data.shape
        if m != k and k != 1:
            raise ValueError('input + output channels need to be the same or |output| == 1')
        if not (d == h == w):
            raise ValueError('filters need to be cubic')
        filt = upsample_filt(h)
        net.params[l][0].data[range(m), range(k), :, :, :] = filt
However, I am not a Python expert. Is this correct, or is there a simpler solution? Thanks.
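One way to sanity-check the 3D conversion (my own verification, not from the post): a trilinear kernel is separable, so the 3D array should equal the outer product of three 1D bilinear kernels built with the same factor/center formula:

```python
import numpy as np

def upsample_filt_3d(size):
    # 3D (trilinear) kernel, same construction as in the snippet above
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    og = np.ogrid[:size, :size, :size]
    return ((1 - abs(og[0] - center) / factor) *
            (1 - abs(og[1] - center) / factor) *
            (1 - abs(og[2] - center) / factor))

def bilinear_1d(size):
    # 1D bilinear kernel with the same factor/center convention
    factor = (size + 1) // 2
    center = factor - 1 if size % 2 == 1 else factor - 0.5
    x = np.arange(size)
    return 1 - abs(x - center) / factor

# the 3D kernel should be the outer product of three 1D kernels
k = upsample_filt_3d(4)
sep = np.einsum('i,j,k->ijk', bilinear_1d(4), bilinear_1d(4), bilinear_1d(4))
assert np.allclose(k, sep)
```

If this holds for the sizes you use, the 3D generalization of the 2D filler is at least internally consistent.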