How do I create a .caffemodel file from training images and their labels in Python?
Tags: python, python-2.7, face-recognition, caffe

I am currently working on age classification based on open-source software. The Python code contains:
import caffe  # pycaffe must be on your PYTHONPATH

age_net_pretrained = './age_net.caffemodel'
age_net_model_file = './deploy_age.prototxt'
age_net = caffe.Classifier(age_net_model_file, age_net_pretrained,
                           channel_swap=(2, 1, 0),  # RGB -> BGR, Caffe's channel order
                           raw_scale=255,           # rescale [0, 1] images to [0, 255]
                           image_dims=(256, 256))   # resize inputs before cropping
where the .prototxt file is shown below. I kept the ".caffemodel" file that the original author provided with the source code. However, I would like to create it again from my own face database. Could you suggest a tutorial or a procedure for creating it? Assume I have an image folder containing 100 images for each age group, for example. This is the prototxt file. Thanks in advance.
name: "CaffeNet"
input: "data"
input_dim: 1
input_dim: 3
input_dim: 227
input_dim: 227
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 7
    stride: 4
  }
}
layers {
  name: "relu1"
  type: RELU
  bottom: "conv1"
  top: "conv1"
}
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm1"
  type: LRN
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv2"
  type: CONVOLUTION
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
  }
}
layers {
  name: "relu2"
  type: RELU
  bottom: "conv2"
  top: "conv2"
}
layers {
  name: "pool2"
  type: POOLING
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "norm2"
  type: LRN
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layers {
  name: "conv3"
  type: CONVOLUTION
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layers {
  name: "relu3"
  type: RELU
  bottom: "conv3"
  top: "conv3"
}
layers {
  name: "pool5"
  type: POOLING
  bottom: "conv3"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layers {
  name: "fc6"
  type: INNER_PRODUCT
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 512
  }
}
layers {
  name: "relu6"
  type: RELU
  bottom: "fc6"
  top: "fc6"
}
layers {
  name: "drop6"
  type: DROPOUT
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc7"
  type: INNER_PRODUCT
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 512
  }
}
layers {
  name: "relu7"
  type: RELU
  bottom: "fc7"
  top: "fc7"
}
layers {
  name: "drop7"
  type: DROPOUT
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layers {
  name: "fc8"
  type: INNER_PRODUCT
  bottom: "fc7"
  top: "fc8"
  inner_product_param {
    num_output: 8
  }
}
layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}
To get a caffemodel you need to train the network. The prototxt file above is only for deploying the model; it cannot be used for training.

You need to add a data layer that points at your database. To use the file list you mentioned, the layer's source should be HDF5. You will probably also want to add a transform_param with the image mean. For efficiency, you can replace the image files with an LMDB or LevelDB database.

At the end of the network, you must replace the "prob" layer with a "loss" layer. Something like this (in the old-style `layers` syntax your prototxt uses, the softmax-with-loss type also needs the label blob as a second bottom):

layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}

A catalogue of the available layers can be found in the Caffe layer documentation. Or, since your network is a well-known one... have a look at this tutorial :P
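As a sketch of the changes described above, a training prototxt in the same old-style `layers` syntax would replace the four `input`/`input_dim` lines with a data layer and swap the final "prob" layer for a loss layer. The database path, mean file, and batch size below are placeholders, not values from the original project:

```protobuf
layers {
  name: "data"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "age_train_lmdb"       # placeholder: an LMDB built from your images
    backend: LMDB
    batch_size: 64                 # pick a batch size that fits your memory
  }
  transform_param {
    mean_file: "mean.binaryproto"  # placeholder: mean image of the training set
    crop_size: 227
    mirror: true
  }
  include: { phase: TRAIN }
}

# ... conv1 through fc8 exactly as in the deploy file above ...

layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}
```

With a solver.prototxt pointing at this network, `caffe train --solver=solver.prototxt` periodically writes snapshot files, and those snapshots are the .caffemodel files you are after.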
Comments on this answer:

"Caffe includes a correct prototxt file for training this network ("train_val.prototxt")."

"You say this prototxt file is only for deploying the model. Is that because the first input dimension is 1, and for training you would need to specify a reasonable batch size?"

"Hi, I am new to this. I want to create my own .caffemodel and convert it to CoreML. How can I do that? Please guide me."