Deep learning: using an SPP layer in Caffe causes a check failure: pad_w_ < kernel_w_ (1 vs. 1)

OK, I have a question about using the SPP layer in Caffe. This is a follow-up to an earlier question.

When using the SPP layer, I get the error output below. It seems the image has become too small by the time it reaches the SPP layer? The images I am using are small: widths range between 10 and 20 pixels, and heights between 30 and 35 pixels.

I0719 12:18:22.553256 2114932736 net.cpp:406] spatial_pyramid_pooling <- conv2
I0719 12:18:22.553261 2114932736 net.cpp:380] spatial_pyramid_pooling -> pool2
F0719 12:18:22.553505 2114932736 pooling_layer.cpp:74] Check failed: pad_w_ < kernel_w_ (1 vs. 1) 
*** Check failure stack trace: ***
    @        0x106afcb6e  google::LogMessage::Fail()
    @        0x106afbfbe  google::LogMessage::SendToLog()
    @        0x106afc53a  google::LogMessage::Flush()
    @        0x106aff86b  google::LogMessageFatal::~LogMessageFatal()
    @        0x106afce55  google::LogMessageFatal::~LogMessageFatal()
    @        0x1068dc659  caffe::PoolingLayer<>::LayerSetUp()
    @        0x1068ffd98  caffe::SPPLayer<>::LayerSetUp()
    @        0x10691123f  caffe::Net<>::Init()
    @        0x10690fefe  caffe::Net<>::Net()
    @        0x106927ef8  caffe::Solver<>::InitTrainNet()
    @        0x106927325  caffe::Solver<>::Init()
    @        0x106926f95  caffe::Solver<>::Solver()
    @        0x106935b46  caffe::SGDSolver<>::SGDSolver()
    @        0x10693ae52  caffe::Creator_SGDSolver<>()
    @        0x1067e78f3  train()
    @        0x1067ea22a  main
    @     0x7fff9a3ad5ad  start
    @                0x5  (unknown)
It turns out I was right: my images were too small. I changed the net and it worked. I removed one conv layer and replaced the regular pooling layer with the SPP layer. I also had to set the test batch size to 1. Accuracy is high, but my F1 score dropped. I don't know whether that is related to the small test batch size I had to use.
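For anyone hitting the same check: Caffe's SPP layer derives a kernel and pad for each pyramid level from the incoming feature-map size. A minimal Python sketch of that computation (mirroring the logic of SPPLayer::GetPoolingParam in spp_layer.cpp, so treat the details as an approximation of the actual source) shows why a map squeezed down to 1 pixel wide fails at pyramid level 1:

import math

# Rough re-implementation of how Caffe's SPP layer picks the pooling
# parameters for one pyramid level (after SPPLayer::GetPoolingParam).
def spp_pooling_param(pyramid_level, bottom_h, bottom_w):
    num_bins = 2 ** pyramid_level
    # Kernel sized so that num_bins windows span the whole map.
    kernel_h = math.ceil(bottom_h / num_bins)
    kernel_w = math.ceil(bottom_w / num_bins)
    # Pad so that num_bins kernels exactly cover the padded map.
    pad_h = (kernel_h * num_bins - bottom_h + 1) // 2
    pad_w = (kernel_w * num_bins - bottom_w + 1) // 2
    return kernel_h, kernel_w, pad_h, pad_w

# A feature map only 1 pixel wide, pooled at pyramid level 1 (2x2 bins):
kh, kw, ph, pw = spp_pooling_param(1, 8, 1)
print(kw, pw)  # 1 1 -> violates CHECK(pad_w_ < kernel_w_): "1 vs. 1"

In other words, once the conv stack shrinks the width to a single pixel, no valid 2x2 pooling grid exists, which is exactly the "1 vs. 1" in the log above.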


Net:

name: "TessDigitMean"
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_SPP/784/caffe/train_lmdb"
    batch_size: 1  # was 64 before switching to the SPP net
    backend: LMDB
  }
}
layer {
  name: "input"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "/Users/rvaldez/Documents/Datasets/Digits/SeperatedProviderV3_1020_SPP/784/caffe/test_lmdb"
    batch_size: 1
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    pad_w: 2
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}

layer {
  name: "spatial_pyramid_pooling"
  type: "SPP"
  bottom: "conv1"
  top: "pool2"
  spp_param {
    pyramid_height: 2
  }
} 
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
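A useful property of this net: whatever the input size, the SPP layer emits a fixed-length vector, which is what lets ip1 keep a fixed weight shape. With pyramid_height: 2, each channel is pooled into a 1x1 grid plus a 2x2 grid, i.e. 5 bins, so ip1 always receives 20 * 5 = 100 inputs. A quick sanity check, with the values taken from the net above:

channels = 20        # num_output of conv1 above
pyramid_height = 2   # spp_param above

# Pyramid level l pools each channel into a 2^l x 2^l grid of bins.
bins = sum((2 ** l) ** 2 for l in range(pyramid_height))
print(bins, bins * channels)  # 5 bins per channel, 100 inputs to ip1

The batch_size: 1 is still needed because a Caffe blob requires every image in a batch to share the same height and width; SPP removes that constraint at the fully connected layers, not at the data layer.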


It seems you are right: the images you are using are too small. You might consider padding the conv layers you are using, or avoiding pooling, to keep the intermediate feature maps large enough.
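To make the padding suggestion concrete: with stride 1, a convolution produces an output of size in + 2*pad - kernel + 1, so an unpadded 5x5 conv shaves 4 pixels off each dimension, while pad: 2 preserves the size. A quick check with the widths from the question (plain Python, standard Caffe size arithmetic):

def conv_out(in_size, kernel, pad, stride=1):
    # Caffe's convolution output size: (in + 2*pad - kernel) / stride + 1
    return (in_size + 2 * pad - kernel) // stride + 1

print(conv_out(10, 5, 0))  # 6  -> an unpadded 5x5 conv shrinks a 10-px width
print(conv_out(10, 5, 2))  # 10 -> pad: 2 keeps the feature map full size

Stacking a couple of unpadded convs and a pooling layer can easily take a 10-pixel width down to 1, which is exactly the situation the SPP layer rejects.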