Neural network: how to set up the Caffe imagenet_solver.prototxt file for fewer JPGs, program exits after iteration 0


We need help understanding the parameters for a much smaller training set (6,000 JPGs) and val set (170 JPGs). Our run is killed and exits right after printing the test scores at iteration 0.

We are trying to run the imagenet example from the tutorial on the Caffe website at

http://caffe.berkeleyvision.org/gathered/examples/imagenet.html.  
Instead of using the full set of ILSVRC2 images from the package, we use a training set of 6,000 JPEGs and a val set of 170 JPEG images. Per the instructions, they are 256 x 256 JPEG files in the train and val directories, respectively. We ran the script to get the auxiliary data:

./data/ilsvrc12/get_ilsvrc_aux.sh
The train.txt and val.txt files are set up to assign each JPEG file to one of our two possible categories. We then ran the script to compute the image mean, which appeared to run fine:
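For reference, each line in train.txt/val.txt is just an image file name followed by a zero-based integer label, so with two categories the labels are 0 and 1. A minimal sketch (these file names are made up for illustration):

cat_0001.jpg 0
cat_0002.jpg 0
dog_0001.jpg 1
dog_0002.jpg 1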

./examples/imagenet/make_imagenet_mean.sh
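For context, in Caffe versions of this vintage make_imagenet_mean.sh is a thin wrapper around the compute_image_mean tool, roughly as below; the exact paths and database backend (leveldb vs. lmdb) depend on the checkout, so treat this as a sketch:

./build/tools/compute_image_mean examples/imagenet/ilsvrc12_train_leveldb \
    data/ilsvrc12/imagenet_mean.binaryproto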
We used the model definitions provided in the tutorial for imagenet_train.prototxt and imagenet_val.prototxt. Since we are training on many fewer images, we modified imagenet_solver.prototxt as follows:

train_net: "./imagenet_train.prototxt"
test_net: "./imagenet_val.prototxt"
test_iter: 3 
test_interval: 10
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 10
display: 20
max_iter: 45
momentum: 0.9
weight_decay: 0.0005
snapshot: 10
snapshot_prefix: "caffe_imagenet_train"
solver_mode: CPU
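To make the scaling explicit, here is the reasoning behind the key settings, annotated as comments; we assume the stock validation batch size of 50 from imagenet_val.prototxt (visible as "Top shape: 50 ..." in the log below):

# Each test pass evaluates test_iter * val_batch_size images:
# 3 * 50 = 150 of the 170 val images (the last 20 are never tested).
test_iter: 3
# With lr_policy "step", the learning rate decays as
# lr = base_lr * gamma^floor(iter / stepsize) = 0.01, 0.001, ...,
# i.e. a tenfold drop every 10 iterations over max_iter: 45.
base_lr: 0.01
gamma: 0.1
stepsize: 10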
When we run it with:

./train_imagenet.sh
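For reference, in Caffe versions of this vintage train_imagenet.sh is essentially a one-line wrapper around the train_net.bin tool; the exact path below is an assumption and varies by build:

GLOG_logtostderr=1 ./build/tools/train_net.bin imagenet_solver.prototxt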
Running it, we get the following output up to the point where it hangs and is killed:

.......
.......
I0520 23:07:53.175761  4678 net.cpp:85] drop7 <- fc7
I0520 23:07:53.175791  4678 net.cpp:99] drop7 -> fc7 (in-place)
I0520 23:07:53.176246  4678 net.cpp:126] Top shape: 50 4096 1 1  (204800)
I0520 23:07:53.176275  4678 net.cpp:152] drop7 needs backward computation.
I0520 23:07:53.176296  4678 net.cpp:75] Creating Layer fc8
I0520 23:07:53.176306  4678 net.cpp:85] fc8 <- fc7
I0520 23:07:53.176314  4678 net.cpp:111] fc8 -> fc8
I0520 23:07:53.184213  4678 net.cpp:126] Top shape: 50 1000 1 1 (50000)
I0520 23:07:53.184908  4678 net.cpp:152] fc8 needs backward computation.
I0520 23:07:53.185607  4678 net.cpp:75] Creating Layer prob
I0520 23:07:53.186135  4678 net.cpp:85] prob <- fc8
I0520 23:07:53.186538  4678 net.cpp:111] prob -> prob
I0520 23:07:53.187166  4678 net.cpp:126] Top shape: 50 1000 1 1 (50000)
I0520 23:07:53.187696  4678 net.cpp:152] prob needs backward computation.
I0520 23:07:53.188244  4678 net.cpp:75] Creating Layer accuracy
I0520 23:07:53.188431  4678 net.cpp:85] accuracy <- prob
I0520 23:07:53.188540  4678 net.cpp:85] accuracy <- label
I0520 23:07:53.188870  4678 net.cpp:111] accuracy -> accuracy
I0520 23:07:53.188907  4678 net.cpp:126] Top shape: 1 2 1 1 (2)
I0520 23:07:53.188915  4678 net.cpp:152] accuracy needs backward computation.
I0520 23:07:53.188922  4678 net.cpp:163] This network produces output accuracy
I0520 23:07:53.188942  4678 net.cpp:181] Collecting Learning Rate and Weight Decay.
I0520 23:07:53.188954  4678 net.cpp:174] Network initialization done.
I0520 23:07:53.188961  4678 net.cpp:175] Memory required for Data 210114408
I0520 23:07:53.189008  4678 solver.cpp:49] Solver scaffolding done.
I0520 23:07:53.189018  4678 solver.cpp:61] Solving CaffeNet
I0520 23:07:53.189033  4678 solver.cpp:106] Iteration 0, Testing net
I0520 23:09:06.699695  4678 solver.cpp:142] Test score #0: 0
I0520 23:09:06.700203  4678 solver.cpp:142] Test score #1: 7.07406
Killed
Done.

Sounds like a memory problem: the process was most likely killed by the Linux out-of-memory (OOM) killer.

Thanks, that turned out to be correct. Adding a few more GB of RAM made it run fine!
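A quick way to confirm an OOM kill (standard Linux commands, not from the original thread) is to check the kernel log right after the process dies; if adding RAM is not an option, shrinking batch_size in imagenet_train.prototxt and imagenet_val.prototxt (e.g. 50 down to 25) roughly halves the data memory footprint:

dmesg | grep -i -E "out of memory|killed process"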