Machine learning: how do I add more layers to an existing model (e.g., a Teachable Machine model)?
I am trying to use the Google model exported from the Teachable Machine application and add more layers before the output layer. Whenever I retrain the model, it returns the following error:

ValueError: Input 0 of layer dense_25 is incompatible with the layer: expected axis -1 of input shape to have value 5 but received input with shape [20, 512]

Here is my approach:

When I retrain the model, it returns the error above. If I retrain the model without adding the new layers, it works fine. Can anyone tell me what the problem is?

Tags: machine-learning, computer-vision, keras-layer, cnn, image-classification

Answer

If you want to add layers between two layers of a pre-trained model, it is not as simple as appending them with the add method. Doing so leads to unexpected behavior.

Error analysis: if you build the model the way you specified, model.summary() outputs:
Model: "sequential_12"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
sequential_9 (Sequential)    (None, 1280)              410208
_________________________________________________________________
sequential_11 (Sequential)   (None, 512)               131672
_________________________________________________________________
dense_12 (Dense)             (None, 128)               768
_________________________________________________________________
dense_13 (Dense)             (None, 32)                4128
_________________________________________________________________
dense_14 (Dense)             (None, 5)                 165
=================================================================
Total params: 546,941
Trainable params: 532,861
Non-trainable params: 14,080
_________________________________________________________________
Everything looks fine here, but look closer:
for l in model.layers:
    print("layer:", l.name, ", expects input of shape:", l.input_shape)
Output:
layer : sequential_9 , expects input of shape : (None, 224, 224, 3)
layer : sequential_11 , expects input of shape : (None, 1280)
layer : dense_12 , expects input of shape : (None, 5) <-- **PROBLEM**
layer : dense_13 , expects input of shape : (None, 128)
layer : dense_14 , expects input of shape : (None, 32)
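The inspection above shows dense_12 expecting a 5-dimensional input. A minimal sketch (a small stand-in model, not the actual Teachable Machine graph; layer sizes are assumptions) reproduces why: Sequential.add always attaches the new layer to the model's current output, so anything appended after the 5-unit classification head is wired to receive 5 inputs, never inserted between the feature extractor and the head.

```python
# Stand-in model (not the Teachable Machine model) showing why add() misbehaves.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(1280, activation="relu", input_shape=(224,)),  # stands in for the feature extractor
    Dense(5, activation="softmax"),                      # 5-class output head
])

# add() connects the new layer to the CURRENT output (5 units),
# not in between the extractor and the head:
model.add(Dense(128, activation="relu"))
print(model.layers[-1].input_shape)  # (None, 5) -- the new layer expects 5 inputs
```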
Solution: rebuild the model with the functional API, wiring the new layers between the two pre-trained blocks:

sequential_1 = model.layers[0]  # re-using the pre-trained feature extractor (sequential_9)
sequential_2 = model.layers[1]  # re-using the pre-trained classifier head (sequential_11)

from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

inp_sequential_1 = Input(sequential_1.layers[0].input_shape[1:])
out_sequential_1 = sequential_1(inp_sequential_1)

# adding layers in between sequential_9 and sequential_11
out_intermediate = Dense(512, activation="relu")(out_sequential_1)
out_intermediate = Dense(128, activation="relu")(out_intermediate)
out_intermediate = Dense(32, activation="relu")(out_intermediate)

# always include a layer whose output shape matches the input shape of
# sequential_11, in this case 1280
out_intermediate = Dense(1280, activation="relu")(out_intermediate)

# the output of the intermediate layers is fed to sequential_11
output = sequential_2(out_intermediate)
final_model = Model(inputs=inp_sequential_1, outputs=output)

Output of final_model.summary():

Model: "functional_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_5 (InputLayer)         [(None, 224, 224, 3)]     0
_________________________________________________________________
sequential_9 (Sequential)    (None, 1280)              410208
_________________________________________________________________
dense_15 (Dense)             (None, 512)               655872
_________________________________________________________________
dense_16 (Dense)             (None, 128)               65664
_________________________________________________________________
dense_17 (Dense)             (None, 32)                4128
_________________________________________________________________
dense_18 (Dense)             (None, 1280)              42240
_________________________________________________________________
sequential_11 (Sequential)   (None, 5)                 128600
=================================================================
Total params: 1,306,712
Trainable params: 1,292,632
Non-trainable params: 14,080

Comments

OP: Thank you for your time. I followed your suggestion, but unfortunately it returned another error at input = Input(model.layers[0].input_shape[0]): ValueError: Please provide to Input either a shape or a tensor argument. Note that shape does not include the batch dimension.

Answerer: Oh, I forgot: layer.input_shape includes the first axis, which denotes the variable batch size. Excluding it (input_shape[1:]) will fix the error.

OP: It works, but it ruins the original model architecture. Going back to the original model above: is it possible to add some layers between sequential_9 and sequential_11 without ruining its architecture? You can get the model from the github link below:

OP: Awesome!!! It works, thank you so much for your time. One more question: the reason for adding a Dense layer with 1280 units before the output is to follow the original architecture, since sequential_11 expects an input of 1280 units. Am I right?
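The two points raised in the comments can be sketched with a small stand-in model (the names feature_extractor and head, and the layer sizes, are assumptions, not the Teachable Machine model): layer.input_shape includes the batch axis, which must be sliced off before passing it to Input, and the head's first layer fixes how many units the last inserted Dense layer must have.

```python
# Stand-in blocks mirroring sequential_9 (features) and sequential_11 (head).
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Sequential

feature_extractor = Sequential([Dense(1280, activation="relu", input_shape=(224,))])
head = Sequential([Dense(5, activation="softmax", input_shape=(1280,))])

# 1) layer.input_shape includes the batch axis (None); passing its first
#    entry, as in Input(layers[0].input_shape[0]), hands Input a bare None
#    and raises the "provide either a shape or a tensor" ValueError.
print(feature_extractor.layers[0].input_shape)        # (None, 224)
inp = Input(feature_extractor.layers[0].input_shape[1:])  # drop the batch axis

# 2) the head's first layer expects 1280 features, which is why the last
#    inserted Dense layer must have exactly 1280 units.
print(head.layers[0].input_shape)                     # (None, 1280)
```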