NumPy: converting the VGG16 output shape from 4096 features to 2048


I am trying to use the pretrained VGG16 model for image classification and to dump the extracted features into a CSV file, but I am facing a problem with the number of features: I want to get 2048 features instead of 4096. I read a short article saying that I could remove a layer from the VGG16 model and then get 2048 features, but I am stuck on this. Can anyone correct me?

import os
import numpy as np
import pandas as pd
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.models import Model

def read_images(folder_path, classlbl):
    # load all images into a list
    images = []
    img_width, img_height = 224, 224
    class1 = []
    for img in os.listdir(folder_path):
        img = os.path.join(folder_path, img)
        img = load_img(img, target_size=(img_width, img_height))
        class1.append(classlbl)  # class label for this image
        images.append(img)
    return images, class1

def computefeatures(model, image):
    # convert the image pixels to a numpy array
    image = img_to_array(image)
    # reshape data for the model (add the batch dimension)
    image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
    # prepare the image for the VGG model
    image = preprocess_input(image)
    # get the extracted features
    features = model.predict(image)
    return features

# load model
model = VGG16()

# remove the output layer
model.layers.pop()
model = Model(inputs=model.inputs, outputs=model.layers[-1].output)

# read the images and their class labels
folder_path = '/content/Images'
classlbl = 5

images, class1 = read_images(folder_path, classlbl)

# call the function to compute the features for each image;
# the array starts with 0 rows and 4096 columns (the number of features from VGG16)
list_features1 = np.empty((0, 4096), float)
for img in range(len(images)):
    f2 = computefeatures(model, images[img])  # compute the features for each image
    list_features1 = np.append(list_features1, f2, axis=0)

classes1 = []
count = 0
for i in range(156):
    if count >= 0 and count <= 156:
        classes1.append(5)
    count = count + 1
print(len(classes1))
df1= pd.DataFrame(list_features1,columns=list(range(1,4097)))
df1.head()
Expected output:

1        2        3        4        ...      2048
0.12     0.23     0.345    0.5372   ...      0.21111
0.2313   0.321    0.214    0.3542   ...      0.46756
...
Note: if I simply replace it with 2048,

list_features1 = np.empty((0, 2048), float)

it returns the error:

all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 2048 and the array at index 1 has size 4096
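
The mismatch is expected: as long as the model itself still outputs 4096 values per image, np.append along axis 0 cannot concatenate a (1, 4096) row onto a (0, 2048) buffer. A minimal NumPy reproduction:

import numpy as np

buf = np.empty((0, 2048), float)
row = np.ones((1, 4096))             # what model.predict() still returns here
np.append(buf, row, axis=0)          # ValueError: dimensions along axis 1 must match
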
This is my model architecture:

Model: "vgg16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_8 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544 
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000   
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0

The easiest way would be to add a Dense layer after the 4096-unit one, with only 2096 units, right before the output layer. In doing so, I would keep the weights of the original model fixed. To achieve this, you can compute the features as before and feed them as the input of a second model with the following structure (assuming a two-class problem):

Layer (type)                 Output Shape              Param #   
=================================================================
input_11 (InputLayer)        [(None, 4096)]            0         
_________________________________________________________________
dense_13 (Dense)             (None, 2096)              8587312   
_________________________________________________________________
dense_14 (Dense)             (None, 2)                 4194      
=================================================================
I am not sure whether I understand your question. You can use pop() on model.layers and then use model.layers[-1].output to create the new layer:

import keras
from keras.models import Sequential
from keras.layers import Dense

vgg16_model = keras.applications.vgg16.VGG16()

model = Sequential()

# copy every VGG16 layer except the final 1000-way 'predictions' layer
for layer in vgg16_model.layers[:-1]:
    model.add(layer)

model.layers.pop()

# Freeze the copied layers
for layer in model.layers:
    layer.trainable = False

# Add a new 2048-unit softmax layer in place of the earlier 'predictions' layer.
model.add(Dense(2048, activation='softmax'))

# Check the summary; the new layer has been added.
model.summary()
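
If this new model is then the one passed to the question's computefeatures() in the extraction loop, the feature buffer and the DataFrame also need 2048 columns; a sketch reusing the question's helpers (images, computefeatures):

import numpy as np
import pandas as pd

# re-run the extraction with the NEW model so each predict() call returns (1, 2048)
list_features1 = np.empty((0, 2048), float)      # 2048 columns now, not 4096
for img in images:                               # `images` from read_images()
    f2 = computefeatures(model, img)             # pass the new model, not the old one
    list_features1 = np.append(list_features1, f2, axis=0)

df1 = pd.DataFrame(list_features1, columns=list(range(1, 2049)))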


Can you retrain it on your data?

The answer's code runs fine, but the problem is that I am getting 4096 features instead of 2048. And yes, I can retrain it on the data.

Thank you, but I am expecting 2048, not 2096; could you tell me what changes I need to make in the code?
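
Regarding that last comment: presumably the only change needed in the first answer's second model is the width of the new Dense layer (2048 instead of 2096). A sketch under the same assumptions as above, which also exposes the 2048-dimensional features themselves:

from keras.models import Model
from keras.layers import Input, Dense

inputs = Input(shape=(4096,))
x = Dense(2048, activation='relu')(inputs)       # 2048 units instead of 2096
outputs = Dense(2, activation='softmax')(x)      # assumed two-class problem

head = Model(inputs, outputs)                    # train this on the dumped 4096-d features
feature_model = Model(inputs, x)                 # after training, yields the 2048-d vectors

# features_2048 = feature_model.predict(list_features1)   # list_features1 has shape (N, 4096)
# df1 = pd.DataFrame(features_2048, columns=list(range(1, 2049)))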