Python: how to concatenate/combine two transfer learning models in Keras


How can we join/combine two models in transfer learning with Keras?

I have two models:
Model 1 = my own model
Model 2 = a trained model

I can combine these models by putting Model 2 first (as the input) and then passing its output to Model 1; that is the conventional approach, as sketched below.
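
For reference, a minimal sketch of that conventional order, using hypothetical toy models as stand-ins for the real ones (the trained model comes first, the new model is stacked on its output):

from keras.layers import Dense, Input
from keras.models import Model

# toy stand-ins: "trained" plays the role of Model 2, "mine" the role of Model 1
inp2 = Input(shape=(50,))
trained = Model(inp2, Dense(30)(inp2))   # pretend this one is already trained

inp1 = Input(shape=(30,))
mine = Model(inp1, Dense(11)(inp1))      # the new, untrained model

# conventional order: trained model first, new model on top of its output
conventional = Model(trained.inputs, mine(trained.outputs))
conventional.summary()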


However, I want to do it the other way around: take Model 1 first (as the input) and then pass its output to Model 2 (i.e., the trained model).

It is exactly the same procedure; just make sure that the output of your model has the same shape as the input of the other model.

from keras.models import Model

output = model2(model1.outputs)
joinedModel = Model(model1.inputs,output)

Before compiling, make sure (if that is what you want) to set all layers from model 2 to trainable=False, so that training will not change the already-trained model.
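
A minimal sketch of that freezing step, assuming model2 is the trained model from the snippet above:

# freeze every layer of the trained model so its weights are not updated
for layer in model2.layers:
    layer.trainable = False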


Test code:

from keras.layers import *
from keras.models import Sequential, Model

#creating model 1 and model 2 -- the "previously existing models"
m1 = Sequential()
m2 = Sequential()
m1.add(Dense(20,input_shape=(50,)))
m1.add(Dense(30))
m2.add(Dense(5,input_shape=(30,)))
m2.add(Dense(11))

#creating model 3, joining the models 
out2 = m2(m1.outputs)
m3 = Model(m1.inputs,out2)

#checking out the results
m3.summary()

#layers in model 3
print("\nthe main model:")
for i in m3.layers:
    print(i.name)

#layers inside the last layer of model 3
print("\ninside the submodel:")
for i in m3.layers[-1].layers:
    print(i.name)

Output:

Layer (type)                 Output Shape              Param #   
=================================================================
dense_21_input (InputLayer)  (None, 50)                0         
_________________________________________________________________
dense_21 (Dense)             (None, 20)                1020      
_________________________________________________________________
dense_22 (Dense)             (None, 30)                630       
_________________________________________________________________
sequential_12 (Sequential)   (None, 11)                221       
=================================================================
Total params: 1,871
Trainable params: 1,871
Non-trainable params: 0
_________________________________________________________________

the main model:
dense_21_input
dense_21
dense_22
sequential_12

inside the submodel:
dense_23
dense_24
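
As a rough usage illustration (not part of the original answer), the joined model m3 can then be compiled and trained like any other Keras model; the numpy arrays below are random placeholders whose shapes match the test code:

import numpy as np

# optionally freeze the nested trained model (the last layer of m3) first
m3.layers[-1].trainable = False

m3.compile(optimizer='adam', loss='mse')
x = np.random.random((8, 50))   # dummy inputs, shape matches m1's input
y = np.random.random((8, 11))   # dummy targets, shape matches m2's output
m3.fit(x, y, epochs=1, verbose=0)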

The problem has been solved.

I used the model.add() function and added all the required layers from both Model 1 and Model 2.

The code below adds the first 10 layers of model2 after model1:

for i in model2.layers[:10]:
    model.add(i)
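
A self-contained sketch of that layer-copying approach, with hypothetical toy models whose shapes line up; note that layers added this way are shared with model2 (not copied), so they keep their trained weights:

from keras.layers import Dense
from keras.models import Sequential

model = Sequential()                      # plays the role of Model 1
model.add(Dense(30, input_shape=(50,)))

model2 = Sequential()                     # plays the role of the trained Model 2
model2.add(Dense(5, input_shape=(30,)))
model2.add(Dense(11))

# append the first layers of model2 onto model (here there are only 2)
for i in model2.layers[:10]:
    model.add(i)

model.summary()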


I tried the same approach before, but it only added the last layer of model2 to model1. I have posted my own answer.
Forget Sequential models, they are purely a limitation. Go with the code shown above.
I did use the Model API as you suggested. It does not work; it only includes the last layer.
It does include all the layers. The summary shows the name of the whole sub-model, not the names of the individual layers inside it – see the test code.
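
To address the point raised in the comments (the summary lists the nested model by name rather than its individual layers), a small helper – hypothetical, not from the original thread – can walk into nested models and print every layer:

def print_all_layers(model, indent=0):
    # recursively descend into nested models (layers that themselves have .layers)
    for layer in model.layers:
        print(" " * indent + layer.name)
        if hasattr(layer, "layers"):
            print_all_layers(layer, indent + 2)

print_all_layers(m3)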