Python: Pre-trained VGG Face Model for the Face Anti-Spoofing Problem

I am trying to solve the face anti-spoofing problem by using a pre-trained model (e.g., VGG trained on ImageNet). Where should I extract the features? After which layer? More specifically, is it enough to change the output of the last fully connected layer from 2622 to 2, since in the face anti-spoofing problem we have two classes (real/fake)?


Actually, is it valid to use a pre-trained VGG Face model (trained on ImageNet) for the face anti-spoofing problem? Could anyone point me to a tutorial or GitHub code that would help me implement this in Python?

It may be too late to answer this question, but better late than never.

Whether you have too few or too many samples depends on your dataset. Generally, a pre-trained model is recommended when your data is limited and/or you want to avoid overfitting while still extracting most of the features from your samples for higher accuracy. If you are using Keras, try VGG16:

from keras.applications import VGG16

conv_base = VGG16(weights="imagenet",
                  include_top=False,
                  input_shape=(150, 150, 3))  # change the shape to match your images
It gives you a stack of layers like this:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 150, 150, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 150, 150, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 150, 150, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 75, 75, 64)        0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 75, 75, 128)       73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 75, 75, 128)       147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 37, 37, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 37, 37, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 37, 37, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 37, 37, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 18, 18, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 18, 18, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 18, 18, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 18, 18, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 9, 9, 512)         0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 9, 9, 512)         2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 4, 4, 512)         0         
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
To use this model you have two options. The first is to use it only as a feature extractor: run your images through it once, save the extracted features to disk, and then, in a separate step, build a densely connected classifier and feed it those saved features. This approach is much faster than the second one, but its one drawback is that you cannot use data augmentation. This is how you extract features with conv_base's predict method:

features_batch = conv_base.predict(inputs_batch)
# Save the features for each batch and feed them to the dense classifier
# once all of them have been extracted
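A fuller sketch of this feature-extraction loop might look like the following. The dataset sizes are hypothetical, and the random arrays stand in for batches that would normally come from a data generator; weights=None is used here only so the sketch runs without downloading the ImageNet weights (use weights="imagenet" in practice):

```python
import numpy as np
from tensorflow.keras.applications import VGG16

sample_count, batch_size = 20, 10  # hypothetical dataset/batch sizes

conv_base = VGG16(weights=None,  # use weights="imagenet" in practice
                  include_top=False,
                  input_shape=(150, 150, 3))

# Pre-allocate arrays for the extracted features and the labels.
# With a 150x150 input, conv_base outputs (4, 4, 512) per image.
features = np.zeros((sample_count, 4, 4, 512))
labels = np.zeros((sample_count,))

for i in range(sample_count // batch_size):
    # In practice inputs_batch/labels_batch come from your data generator.
    inputs_batch = np.random.rand(batch_size, 150, 150, 3)
    labels_batch = np.random.randint(0, 2, size=batch_size)  # real/fake
    features[i * batch_size:(i + 1) * batch_size] = conv_base.predict(inputs_batch)
    labels[i * batch_size:(i + 1) * batch_size] = labels_batch

# Flatten so the features can be fed to a densely connected classifier.
features = features.reshape(sample_count, 4 * 4 * 512)
```

The flattened features (and labels) can then be saved to disk and used to train a small dense classifier on their own.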
The second option is to attach the densely connected classifier on top of the VGG model, freeze the conv_base layers, and feed the data through the whole network as usual. This way you can use data augmentation, but it is only practical if you have access to a powerful GPU or cloud compute. Here is the code that freezes conv_base and attaches the dense layers on top of VGG:

# code adapted from the "Deep Learning with Python" book
from keras import models
from keras import layers
conv_base.trainable = False
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
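Putting the pieces together, a minimal end-to-end sketch of this second option could look like this. The learning rate is an assumption (a small value is typical when training on top of a frozen base), and weights=None is used only so the sketch runs without downloading the ImageNet weights:

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

conv_base = VGG16(weights=None,  # use weights="imagenet" in practice
                  include_top=False,
                  input_shape=(150, 150, 3))
conv_base.trainable = False  # freeze the convolutional base

model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # binary output: real vs. fake
])

# Only the two dense layers train; the VGG weights stay fixed.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=2e-5),
              metrics=['accuracy'])

# model.fit(train_generator, validation_data=val_generator, epochs=30)
# (train_generator/val_generator are hypothetical augmented data generators)
```

Because the base is frozen, each epoch only backpropagates through the small dense head, which keeps training feasible even on modest hardware.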
You can even fine-tune the model to your data by unfreezing some of conv_base's top layers. Here is how to freeze every layer up to block5_conv1 and unfreeze the rest:

conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False
# recompile and train your model as before
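Note that changing the trainable flags only takes effect after the model is recompiled, and fine-tuning is usually done with a very low learning rate so the pre-trained weights are not wrecked. A self-contained sketch (the learning rate is an assumption, and weights=None again only avoids the weight download):

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

conv_base = VGG16(weights=None,  # use weights="imagenet" in practice
                  include_top=False,
                  input_shape=(150, 150, 3))
model = models.Sequential([
    conv_base,
    layers.Flatten(),
    layers.Dense(256, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])

# Unfreeze block5_conv1 and every layer after it; keep the rest frozen.
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    layer.trainable = set_trainable

# Recompile so the new trainable flags take effect; use a very low
# learning rate so fine-tuning only nudges the unfrozen weights.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-5),
              metrics=['accuracy'])
```

Fine-tuning is best done after the dense head has already been trained with the base frozen; otherwise the large initial gradients from the randomly initialized head can destroy the unfrozen convolutional weights.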
Hope it helps you get started.