Machine learning: visualizing convolution kernels in Caffe
I have been following the Caffe example for plotting the convolution kernels of my ConvNet. I have attached a picture of my kernels below, but they look completely different from the kernels in the example. I followed the example exactly; does anyone know what the problem is? My network is trained on a set of simulated images with two classes, and it performs quite well, with around 80% test accuracy.
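For context, here is a minimal sketch of the kind of plotting code the Caffe filter-visualization example uses. The prototxt and caffemodel file names are placeholders, and it assumes single-channel (grayscale) inputs, so the conv1 weights of the network below would have shape (40, 1, 5, 5):

import numpy as np
import matplotlib.pyplot as plt
import caffe

# Placeholder file names -- substitute your own deploy prototxt and snapshot.
net = caffe.Net('deploy.prototxt', 'snapshot.caffemodel', caffe.TEST)

# conv1 weights have shape (num_output, channels, kH, kW),
# e.g. (40, 1, 5, 5) for the network below with grayscale input.
filters = net.params['conv1'][0].data

def vis_square(data):
    """Tile an (n, h, w) array of filters into a single grid image."""
    data = (data - data.min()) / (data.max() - data.min())  # scale to [0, 1]
    side = int(np.ceil(np.sqrt(data.shape[0])))             # grid side length
    pad = ((0, side ** 2 - data.shape[0]), (0, 1), (0, 1))  # pad to a full grid
    data = np.pad(data, pad, mode='constant', constant_values=1)
    data = data.reshape((side, side) + data.shape[1:]).transpose((0, 2, 1, 3))
    data = data.reshape((side * data.shape[1], side * data.shape[3]))
    plt.imshow(data, cmap='gray', interpolation='none')     # no smoothing
    plt.axis('off')
    plt.show()

vis_square(filters.reshape(-1, filters.shape[2], filters.shape[3]))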
Well, you may need to set the interpolation parameter to "none" when calling imshow; is that what you are referring to? To get smoother filters, you could try adding a small amount of L2 weight decay to the conv1 layer. See also the annotated conv1 example reproduced at the end of this post.
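To see the effect of the interpolation setting in isolation, here is a small self-contained comparison, with random data standing in for a tiled kernel grid:

import numpy as np
import matplotlib.pyplot as plt

grid = np.random.rand(8, 8)  # stand-in for a small tiled-kernel image

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(grid, cmap='gray')                        # default interpolation smooths the pixels
ax1.set_title('default')
ax2.imshow(grid, cmap='gray', interpolation='none')  # one image block per array cell
ax2.set_title("interpolation='none'")
plt.show()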
What weights are you using? Was this network trained on natural images? How does the net perform? You need to provide more details. / I have updated the question with more information, including the network itself. / Which caffemodel file did you load before plotting the filters? Can you show some of your training examples? / @AnoopK.Prabhu thanks for the compliment.

The network definition from the updated question:
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "/tmp/stage5/mean/mean.binaryproto"
  }
  data_param {
    source: "/tmp/stage5/train/train-lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mean_file: "/tmp/stage5/mean/mean.binaryproto"
  }
  data_param {
    source: "/tmp/stage5/validation/validation-lmdb"
    batch_size: 10
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  convolution_param {
    num_output: 40
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool1"
  top: "ip1"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1.0
  }
  param {
    lr_mult: 2.0
  }
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
For reference, the annotated conv1 example from the Caffe layer documentation, which shows the decay multipliers used for L2 weight decay:

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  # learning rate and decay multipliers for the filters
  param { lr_mult: 1 decay_mult: 1 }
  # learning rate and decay multipliers for the biases
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 96     # learn 96 filters
    kernel_size: 11    # each filter is 11x11
    stride: 4          # step 4 pixels between each filter application
    weight_filler {
      type: "gaussian" # initialize the filters from a Gaussian
      std: 0.01        # distribution with stdev 0.01 (default mean: 0)
    }
    bias_filler {
      type: "constant" # initialize the biases to zero (0)
      value: 0
    }
  }
}
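Note on the decay multipliers: in Caffe the regularization applied to a blob is the solver's global weight_decay scaled by that blob's decay_mult, and the regularization type defaults to L2. So the pattern above (decay_mult: 1 on the filters, decay_mult: 0 on the biases) penalizes the weights but not the biases, and the actual strength of the decay is set by the weight_decay value in the solver prototxt.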