
Python: Merging two Keras models into one


I have to train two models with different optimizers and hidden layers: modelA and modelB. I want the final output to be a combination of the two, like this:

# w = weight I give to each model
output_modelC = output_modelA * w + output_modelB * (1 - w)
Both models share the same input, but after building and compiling them I don't know how to proceed. My code is:

Input = keras.layers.Input(shape=(2,))

#modelA
Hidden_A_1 = keras.layers.Dense(units=20)(Input)
Hidden_A_2 = keras.layers.Dense(units=20)(Hidden_A_1)
Output_A = keras.layers.Dense(units=1, activation='sigmoid')(Hidden_A_2)
optimizer_A = keras.optimizers.SGD(lr=0.00001, momentum=0.09, nesterov=True)
model_A = keras.Model(inputs=Input, outputs=Output_A)
model_A.compile(loss="binary_crossentropy",
                   optimizer=optimizer_A,
                   metrics=['accuracy'])

#modelB
Hidden1_B = keras.layers.Dense(units=10, activation='relu')(Input)
Output_B = keras.layers.Dense(units=1, activation='sigmoid')(Hidden1_B)
model_B = keras.Model(inputs=Input, outputs=Output_B)
optimizer_B = keras.optimizers.Adagrad()
model_B.compile(loss="binary_crossentropy",
                   optimizer=optimizer_B,
                   metrics=['accuracy'])

Assuming you will supply the value of w yourself, the following code may help:

import keras 

Input = keras.layers.Input(shape=(784,))

#modelA
Hidden_A_1 = keras.layers.Dense(units=20)(Input)
Hidden_A_2 = keras.layers.Dense(units=20)(Hidden_A_1)
Output_A = keras.layers.Dense(units=1, activation='sigmoid')(Hidden_A_2)
optimizer_A = keras.optimizers.SGD(lr=0.00001, momentum=0.09, nesterov=True)
model_A = keras.Model(inputs=Input, outputs=Output_A)
model_A.compile(loss="binary_crossentropy",
                   optimizer=optimizer_A,
                   metrics=['accuracy'])

#modelB
Hidden1_B = keras.layers.Dense(units=10, activation='relu')(Input)
Output_B = keras.layers.Dense(units=1, activation='sigmoid')(Hidden1_B)
model_B = keras.Model(inputs=Input, outputs=Output_B)
optimizer_B = keras.optimizers.Adagrad()
model_B.compile(loss="binary_crossentropy",
                   optimizer=optimizer_B,
                   metrics=['accuracy'])

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32') / 255  # scale pixels to [0, 1]
x_test = x_test.astype('float32') / 255

model_A.fit(x_train,y_train)
model_B.fit(x_train,y_train)

w = 0.8
output_modelC = model_A.predict(x_test) * w + model_B.predict(x_test) * (1 - w)
Sample output:

array([[0.98165023],
       [0.9918817 ],
       [0.93426293],
       ...,
       [0.99940777],
       [0.9960805 ],
       [0.9992139 ]], dtype=float32)
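Note that the weighted sum above is plain array arithmetic: a convex combination (w between 0 and 1) of two sigmoid outputs always stays in [0, 1], so the blended values remain valid scores. A tiny standalone check of that arithmetic (the `blend` helper is just illustrative, not part of the original code):

```python
import numpy as np

def blend(pred_a, pred_b, w):
    """Convex combination of two prediction arrays; w is model A's weight."""
    assert 0.0 <= w <= 1.0
    return pred_a * w + pred_b * (1.0 - w)

pa = np.array([[0.9], [0.2]])   # stand-in for model_A.predict(x_test)
pb = np.array([[0.5], [0.6]])   # stand-in for model_B.predict(x_test)
print(blend(pa, pb, 0.8))       # [[0.82], [0.28]]
```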
This may not be the best choice of sample data (MNIST labels are 0-9, not binary), but it is only meant to show how to combine the two networks.
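If you want modelC to be a single Keras model rather than a NumPy post-processing step, one option is to exploit the fact that both outputs hang off the same `Input` tensor. A minimal sketch, assuming `tensorflow.keras` and using a `Lambda` layer for the weighted sum (the layer sizes mirror the post; names are illustrative):

```python
import numpy as np
from tensorflow import keras

w = 0.8  # the blending weight from the post

# Stand-in graph mirroring the post's shared-Input setup.
inp = keras.layers.Input(shape=(2,))
out_a = keras.layers.Dense(1, activation='sigmoid')(
    keras.layers.Dense(20)(inp))
out_b = keras.layers.Dense(1, activation='sigmoid')(
    keras.layers.Dense(10, activation='relu')(inp))

# Because out_a and out_b are built on the same Input tensor, a third Model
# can wrap both; its layers are the same objects, so weights learned by
# fitting model_A / model_B beforehand carry over.
combine = keras.layers.Lambda(lambda t: t[0] * w + t[1] * (1 - w))
model_C = keras.Model(inputs=inp, outputs=combine([out_a, out_b]))

x = np.random.rand(4, 2).astype('float32')
print(model_C.predict(x).shape)  # (4, 1)
```

With this, `model_C.predict` gives the blended output directly, and `model_C.save` captures the whole ensemble in one file.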

希望这有帮助