Neural network: backward pass in a Caffe Python layer is not called / not working?
I tried to implement a simple loss layer in Python using Caffe, without success. As a reference, I found several layers implemented in Python. Starting from the EuclideanLossLayer provided with the Caffe documentation/examples, I could not get it to work and started debugging. Even with this simple TestLayer:
import caffe


class TestLayer(caffe.Layer):
    """Identity layer used to check whether forward/backward are called."""

    def setup(self, bottom, top):
        """
        Checks the correct number of bottom inputs.

        :param bottom: bottom inputs
        :type bottom: [numpy.ndarray]
        :param top: top outputs
        :type top: [numpy.ndarray]
        """
        print('setup')

    def reshape(self, bottom, top):
        """
        Make sure all involved blobs have the right dimension.

        :param bottom: bottom inputs
        :type bottom: caffe._caffe.RawBlobVec
        :param top: top outputs
        :type top: caffe._caffe.RawBlobVec
        """
        print('reshape')
        top[0].reshape(bottom[0].data.shape[0], bottom[0].data.shape[1],
                       bottom[0].data.shape[2], bottom[0].data.shape[3])

    def forward(self, bottom, top):
        """
        Forward propagation: copies the input through unchanged.

        :param bottom: bottom inputs
        :type bottom: caffe._caffe.RawBlobVec
        :param top: top outputs
        :type top: caffe._caffe.RawBlobVec
        """
        print('forward')
        top[0].data[...] = bottom[0].data

    def backward(self, top, propagate_down, bottom):
        """
        Backward pass: passes the gradient through unchanged.

        :param top: top outputs
        :type top: caffe._caffe.RawBlobVec
        :param propagate_down: whether to propagate down to each bottom
        :type propagate_down: [bool]
        :param bottom: bottom inputs
        :type bottom: caffe._caffe.RawBlobVec
        """
        print('backward')
        bottom[0].diff[...] = top[0].diff[...]
I cannot get the Python layer to work. The learning task is deliberately simple: predict whether a real number is positive or negative. The corresponding data is generated as follows and written to LMDBs:
import numpy

N = 10000
N_train = int(0.8*N)
images = []
labels = []
for n in range(N):
    image = (numpy.random.rand(1, 1, 1)*2 - 1).astype(numpy.float)
    label = int(numpy.sign(image))
    images.append(image)
    labels.append(label)
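The LMDB writing itself is not shown in the question. For reference, a minimal sketch of how such (image, label) pairs are commonly serialized with pycaffe and the lmdb package; the path and map_size here are illustrative, not taken from the question:

import lmdb
import caffe

# Write the training split to LMDB; each sample becomes a serialized Datum.
env = lmdb.open('tests/train_lmdb', map_size=int(1e9))
with env.begin(write=True) as txn:
    for i, (image, label) in enumerate(zip(images[:N_train], labels[:N_train])):
        datum = caffe.io.array_to_datum(image, label)  # expects a 3-D array
        txn.put('{:08d}'.format(i).encode('ascii'), datum.SerializeToString())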
Writing the data to LMDBs should be correct: a test with the MNIST dataset provided by Caffe showed no problems. The network is defined as follows:
net.data, net.labels = caffe.layers.Data(batch_size=batch_size,
                                         backend=caffe.params.Data.LMDB,
                                         source=lmdb_path, ntop=2)
net.fc1 = caffe.layers.Python(net.data,
                              python_param=dict(module='tools.layers',
                                                layer='TestLayer'))
net.score = caffe.layers.TanH(net.fc1)
net.loss = caffe.layers.EuclideanLoss(net.score, net.labels)
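For completeness, a minimal sketch of how such a NetSpec definition is typically serialized to the prototxt files shown below; this step is not in the question, and it assumes net is a caffe.NetSpec holding the layers above:

import caffe

net = caffe.NetSpec()  # assumed to precede the layer definitions above
# ... layers attached to `net` exactly as in the snippet above ...

# NetSpec.to_proto() renders the layers as a NetParameter in text format.
with open('tests/train.prototxt', 'w') as f:
    f.write(str(net.to_proto()))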
Solving is done manually via:
for iteration in range(iterations):
    solver.step(step)
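For context, a minimal sketch of the surrounding solver setup, which the question does not show; the iteration counts are hypothetical:

import caffe

caffe.set_mode_cpu()  # matches solver_mode: CPU below
solver = caffe.SGDSolver('tests/solver.prototxt')

iterations = 10  # hypothetical values for illustration
step = 100
for iteration in range(iterations):
    solver.step(step)  # runs `step` forward/backward/update iterations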
The corresponding prototxt files are given below.

solver.prototxt:
weight_decay: 0.0005
test_net: "tests/test.prototxt"
snapshot_prefix: "tests/snapshot_"
max_iter: 1000
stepsize: 1000
base_lr: 0.01
snapshot: 0
gamma: 0.01
solver_mode: CPU
train_net: "tests/train.prototxt"
test_iter: 0
test_initialization: false
lr_policy: "step"
momentum: 0.9
display: 100
test_interval: 100000
train.prototxt:
layer {
name: "data"
type: "Data"
top: "data"
top: "labels"
data_param {
source: "tests/train_lmdb"
batch_size: 64
backend: LMDB
}
}
layer {
name: "fc1"
type: "Python"
bottom: "data"
top: "fc1"
python_param {
module: "tools.layers"
layer: "TestLayer"
}
}
layer {
name: "score"
type: "TanH"
bottom: "fc1"
top: "score"
}
layer {
name: "loss"
type: "EuclideanLoss"
bottom: "score"
bottom: "labels"
top: "loss"
}
test.prototxt:
layer {
name: "data"
type: "Data"
top: "data"
top: "labels"
data_param {
source: "tests/test_lmdb"
batch_size: 64
backend: LMDB
}
}
layer {
name: "fc1"
type: "Python"
bottom: "data"
top: "fc1"
python_param {
module: "tools.layers"
layer: "TestLayer"
}
}
layer {
name: "score"
type: "TanH"
bottom: "fc1"
top: "score"
}
layer {
name: "loss"
type: "EuclideanLoss"
bottom: "score"
bottom: "labels"
top: "loss"
}
I tried to trace it by adding debug messages to the backward and forward methods of TestLayer: only the forward method is ever called during solving (note that no testing is performed, so the calls can only be related to solving). Similarly, I added debug messages in python_layer.hpp:
virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
const vector<Blob<Dtype>*>& top) {
LOG(INFO) << "cpp forward";
self_.attr("forward")(bottom, top);
}
virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
LOG(INFO) << "cpp backward";
self_.attr("backward")(top, propagate_down, bottom);
}
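(Since python_layer.hpp is a header, Caffe has to be recompiled for these LOG statements to take effect.)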
Again, only "cpp forward" is ever printed; "cpp backward" never appears.

This is the expected behavior, because no layer "below" the Python layer actually needs the gradients to compute weight updates. Caffe notices this and skips the backward computation for such layers, since it would be a waste of time.

At network initialization, Caffe prints to the log, for every layer, whether it needs backward computation. In your case, you should see something like:
fc1 does not need backward computation.
If you put an "InnerProduct" or "Convolution" layer below the "Python" layer (e.g. Data -> InnerProduct -> Python -> Loss), backward computation becomes necessary and the backward method is called.

In addition to the answer above, you can also force backward computation for all layers by specifying
force_backward: true
in your network prototxt.
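If you prefer to toggle the flag from Python rather than editing the file by hand, a minimal sketch using the protobuf text format (file paths are illustrative):

from caffe.proto import caffe_pb2
from google.protobuf import text_format

# force_backward is a top-level field of NetParameter.
net_param = caffe_pb2.NetParameter()
with open('tests/train.prototxt') as f:
    text_format.Merge(f.read(), net_param)
net_param.force_backward = True
with open('tests/train.prototxt', 'w') as f:
    f.write(text_format.MessageToString(net_param))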
See the comments there for more information.

Even after setting force_backward: true as suggested by David Stutz, it still did not work for me. I found out that I had forgotten to set the diff of the last layer to 1 at the index of the target class.
As Mohit Jain describes in his caffe-users answer, if you do ImageNet classification with a tabby cat, after the forward pass you have to do something like:
net.blobs['prob'].diff[0][281] = 1 # 281 is tabby cat. diff shape: (1, 1000)
Note that you have to change 'prob' according to the name of your last layer, which is usually a softmax layer called 'prob'. Here is an example based on mine:
deploy.prototxt (it is loosely based on VGG16, just to show the structure of the file, but I did not test it):
main.py:
import cv2
import numpy as np
import caffe

prototxt = 'deploy.prototxt'
model_file = 'smaller_vgg.caffemodel'
net = caffe.Net(prototxt, model_file, caffe.TRAIN)  # not sure if TEST works as well

image = cv2.imread('tabbycat.jpg', cv2.IMREAD_UNCHANGED)
net.blobs['data'].data[...] = image[np.newaxis, np.newaxis, :]
net.forward()
net.blobs['prob'].diff[0, 298] = 1  # set after the forward pass, before backward
backout = net.backward()
# access the gradients from backout['data'] or net.blobs['data'].diff
Following your code: why is net.blobs['prob'].diff[0, 298] no longer 1 after net.backward()? Does net.backward() change the value you preset?

@Stone I am not sure; I have only used this code for guided backprop and Grad-CAM. It could be that Caffe resets the diff after each iteration (and if that is true, I guess all the gradients would have to be reset as well). Does setting net.blobs['prob'].diff[0, 298] = 1 after each backward() call fix it?

Setting net.blobs['prob'].diff[0, 298] = 1 after each backward() call essentially guarantees that its value is still 1. My concern is that if Caffe resets the diff after each iteration (as you said), then the gradients cannot be read from net.blobs[layer_name].diff after net.backward(). Moreover, if accessing net.blobs[layer_name].diff after net.backward() is the correct approach, then the gradient of the topmost layer prob (net.blobs['prob'].diff) should stay as set (e.g. net.blobs['prob'].diff[0, 298] = 1), because the gradient computation starts from the prob layer.

@Stone You are right about accessing net.blobs[layer].diff after backward(); then I do not understand why it would reset the prob diff. If you find anything, please let me know.

If you do not see Caffe resetting the diff after backward(), the problem is probably only on my side; I must have missed some configuration. Thanks!
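A quick, hedged diagnostic for the question raised in these comments (whether backward() resets the manually set top diff), assuming net has been set up and run forward as in main.py above:

import numpy as np

# Set the one-hot diff on the top blob, then check whether backward()
# leaves it untouched and whether a data gradient actually arrives.
net.blobs['prob'].diff[...] = 0
net.blobs['prob'].diff[0, 298] = 1
before = net.blobs['prob'].diff.copy()
net.backward()
print('prob diff unchanged by backward():',
      np.array_equal(before, net.blobs['prob'].diff))
print('data gradient norm:', np.linalg.norm(net.blobs['data'].diff))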