
Python VGG16 multi-input image network


I am trying to use the VGG16 network with multiple input images. Training a simple CNN with 2 inputs only gave me about 50% accuracy, which is why I want to try an established model such as VGG16.
Here is what I have tried:

# imports
from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense

def def_model():
    model = VGG16(include_top=False, input_shape=(224, 224, 3))
    # mark loaded layers as not trainable
    for layer in model.layers:
        layer.trainable = False
    # return last pooling layer
    pool_layer = model.layers[-1].output
    return pool_layer

m1 = def_model()
m2 = def_model() 
m3 = def_model()

# add classifier layers
merge = concatenate([m1, m2, m3])

# optional_conv = Conv2D(64, (3, 3), activation='relu', padding='same')(merge)
# optional_pool = MaxPooling2D(pool_size=(2, 2))(optional_conv)
# flatten = Flatten()(optional_pool)

flatten = Flatten()(merge)
dense1 = Dense(512, activation='relu')(flatten)
dense2 = Dropout(0.5)(dense1)
output = Dense(1, activation='sigmoid')(dense2)


inshape1 = Input(shape=(224, 224, 3))
inshape2 = Input(shape=(224, 224, 3))
inshape3 = Input(shape=(224, 224, 3))
model = Model(inputs=[inshape1, inshape2, inshape3], outputs=output)

  • I get this error when calling the Model function.
  • I know the graph is disconnected, but I cannot find where it happens. Here are the compile and fit calls (one way to wire the inputs into the graph is sketched after these notes):

    # compile model
    model.compile(optimizer="Adam", loss='binary_crossentropy', metrics=['accuracy'])
    model.fit([train1, train2, train3], train, 
               validation_data=([test1, test2, test3], ytest))
    
  • I commented out a few lines: optional_conv and optional_pool. What effect would applying Conv2D and MaxPooling2D after the concatenate call have? (A small shape trace is also sketched after these notes.)
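
A minimal sketch of one way to avoid the "graph disconnected" error, assuming a single shared VGG16 backbone (weight tying) rather than three independent copies: VGG16(include_top=False, input_shape=...) creates its own internal Input layer, so the inshape1/inshape2/inshape3 tensors declared afterwards are never connected to the pooling outputs returned by def_model(). Calling one backbone on each declared Input closes that gap:

    import tensorflow as tf

    # three separate 224x224 RGB inputs
    inshape1 = tf.keras.Input(shape=(224, 224, 3))
    inshape2 = tf.keras.Input(shape=(224, 224, 3))
    inshape3 = tf.keras.Input(shape=(224, 224, 3))

    # one shared VGG16 backbone; its weights are reused for all three images
    base = tf.keras.applications.VGG16(include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # keep the pretrained weights frozen

    # calling the backbone on each Input wires it into the graph
    f1 = base(inshape1)  # (None, 7, 7, 512)
    f2 = base(inshape2)
    f3 = base(inshape3)

    merge = tf.keras.layers.concatenate([f1, f2, f3])  # (None, 7, 7, 1536)
    flatten = tf.keras.layers.Flatten()(merge)
    dense1 = tf.keras.layers.Dense(512, activation='relu')(flatten)
    drop = tf.keras.layers.Dropout(0.5)(dense1)
    output = tf.keras.layers.Dense(1, activation='sigmoid')(drop)

    model = tf.keras.Model(inputs=[inshape1, inshape2, inshape3], outputs=output)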

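As for the commented-out optional_conv / optional_pool lines, here is a small shape trace (a sketch only, assuming merge is the concatenation of three 7x7x512 VGG16 feature maps as in the question's code). The main practical effect of the extra Conv2D/MaxPooling2D pair is that far fewer features reach Flatten, so the first Dense layer needs far fewer weights:

    import tensorflow as tf

    # stand-in for the merged VGG16 feature maps: (None, 7, 7, 1536)
    merge = tf.keras.Input(shape=(7, 7, 1536))

    optional_conv = tf.keras.layers.Conv2D(64, (3, 3), activation='relu',
                                           padding='same')(merge)                  # (None, 7, 7, 64)
    optional_pool = tf.keras.layers.MaxPooling2D(pool_size=(2, 2))(optional_conv)  # (None, 3, 3, 64)

    # Flatten now sees 3*3*64 = 576 features instead of 7*7*1536 = 75264,
    # so a following Dense(512) holds ~0.3M weights instead of ~38.5M.
    flat = tf.keras.layers.Flatten()(optional_pool)
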
  • I suggest taking a look at this answer. Here is one way to achieve this:

    # 3 inputs 
    input0 = tf.keras.Input(shape=(224, 224, 3), name="img0")
    input1 = tf.keras.Input(shape=(224, 224, 3), name="img1")
    input2 = tf.keras.Input(shape=(224, 224, 3), name="img2")
    concate_input = tf.keras.layers.Concatenate()([input0, input1, input2])
    # project the 9 stacked channels back to a 3-channel map of the same
    # spatial size (224, 224), since pretrained models expect 3-channel input
    input = tf.keras.layers.Conv2D(3, (3, 3), 
                         padding='same', activation="relu")(concate_input)
    
    # pass that to imagenet model 
    vg = tf.keras.applications.VGG16(weights=None,
                                     include_top = False, 
                                     input_tensor = input)
    
    # do whatever 
    gap = tf.keras.layers.GlobalAveragePooling2D()(vg.output)
    den = tf.keras.layers.Dense(1, activation='sigmoid')(gap)
    
    # build the complete model 
    model = tf.keras.Model(inputs=[input0, input1, input2], outputs=den)
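
As a follow-up, a hypothetical training call for this model (it assumes the arrays train1/train2/train3, test1/test2/test3 and the label arrays train and ytest from the question exist with matching first dimensions); the dict keys match the name arguments given to the three Input layers:

    model.compile(optimizer="Adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit({"img0": train1, "img1": train2, "img2": train3}, train,
              validation_data=({"img0": test1, "img1": test2, "img2": test3}, ytest),
              epochs=10, batch_size=16)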
    

    Thanks for the answer, @M.Innat. Should I mark the loaded layers as non-trainable, the way I did in my def_model() function? If you want the base layers to be non-trainable, just do vg.trainable = False. Were you able to run the model successfully? Yes, I did, thank you! Great. An upvote would be appreciated if it helped, and feel free to ask if you run into any further issues. :-)
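
For reference, a minimal sketch of that suggestion (an assumption, not part of the original answer): load the ImageNet weights instead of weights=None and freeze the whole backbone before compiling, so only the new head is trained:

    # swap into the answer's code in place of the VGG16 call above
    vg = tf.keras.applications.VGG16(weights="imagenet",
                                     include_top=False,
                                     input_tensor=input)
    vg.trainable = False  # marks every VGG16 layer as non-trainable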