
TensorFlow: using a pretrained model as one branch of a multi-input model

Tags: tensorflow, keras, deep-learning, neural-network, transfer-learning

I have a pretrained model that was created with the Keras functional API.

What I want to do:

  • Retrain the layers of the pretrained model and use it as one dedicated branch
  • Normalize the layers of the other branches with BatchNormalization()
  • Concatenate the branches and feed the result into a single output layer
  • Question: is this a valid fine-tuning approach? (A common two-phase pattern is sketched below.)
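
For context, here is a minimal sketch of the usual two-phase fine-tuning pattern in Keras: freeze the pretrained branch, train the new head, then unfreeze and recompile with a much lower learning rate. The tiny `base` model and all shapes below are hypothetical stand-ins, not the question's actual pretrained model:

    import numpy as np
    from tensorflow import keras

    # Hypothetical stand-in for the pretrained branch; in the question this
    # is loaded with keras.models.load_model(...)
    base = keras.Sequential([keras.layers.Dense(8, activation="relu")])

    inp = keras.Input(shape=(4,))
    out = keras.layers.Dense(1)(base(inp))
    model = keras.Model(inp, out)

    x, y = np.random.rand(32, 4), np.random.rand(32, 1)

    # Phase 1: freeze the pretrained branch and train only the new head
    base.trainable = False
    model.compile(loss="mse", optimizer=keras.optimizers.Adam(learning_rate=1e-3))
    model.fit(x, y, epochs=1, verbose=0)

    # Phase 2: unfreeze, recompile with a much lower learning rate, and
    # fine-tune end to end (changing trainable requires a recompile)
    base.trainable = True
    model.compile(loss="mse", optimizer=keras.optimizers.Adam(learning_rate=1e-5))
    model.fit(x, y, epochs=1, verbose=0)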

    Model construction (the model architecture):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import regularizers
    from tensorflow.keras.layers import Input, Dense, Dropout, BatchNormalization, Concatenate
    from tensorflow.keras.models import Model
    from tensorflow.keras.optimizers import Adam

    def train_model(input1, input2, input3):

        # Load the pretrained model
        previous_model = keras.models.load_model('/home/john/Dupont_Internship/Transfer_Learning/Models/Meltome/meltome_fully_saved_embeddings_model/')

        # Freeze all of the pretrained model's layer weights
        previous_model.trainable = False
    
        # saving the output layer for reconstruction later
        config_output_layer = previous_model.layers[-1].get_config()
        weights_output_layer = previous_model.layers[-1].get_weights()
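
        # A hypothetical sketch (left as comments) of how the saved config and
        # weights could rebuild that output layer later; assumes it is Dense:
        #   rebuilt = Dense.from_config(config_output_layer)
        #   rebuilt.build(previous_model.layers[-1].input_shape)
        #   rebuilt.set_weights(weights_output_layer)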
    
        # Give the 1-D inputs an explicit feature axis: (n,) -> (n, 1)
        input1 = np.expand_dims(input1, 1)
        input3 = np.expand_dims(input3, 1)
    
        # Define the input layers for the DNN (these reassignments shadow the
        # NumPy arrays above; only their shapes were needed)
        input1 = Input(shape = (input1.shape[1],), name = "input1")
        input2 = Input(shape = (input2.shape[1],), name = "input2")
        input3 = Input(shape = (input3.shape[1],), name = "input3")
    
        inputs = [input1, input2, input3]
    
        # First Branch of DNN (input1)
        x = BatchNormalization()(input1) 
        x = Dense(units = 5, activation='relu', kernel_regularizer=regularizers.l2(0.01))(x)
        x = Dropout(0.08)(x)
    
        # Second Branch of DNN (input2): the frozen pretrained model
        y = previous_model(input2)
        #y = Dense(units = 256, activation = "relu", kernel_regularizer=regularizers.l2(0.01))(y)
        #y = Dropout(0.08)(y)
        #y = BatchNormalization()(y)
        
        # Third Branch of DNN (input3: Detergents)
        z = BatchNormalization()(input3) 
        z = Dense(units = 3, activation='relu', kernel_regularizer=regularizers.l2(0.01))(z)
        z = Dropout(0.08)(z)
    
        # Merge the branches into a single large vector
        concatenated = Concatenate()([x, y, z])
        
        # Apply the final output layer
        outputs = Dense(1, name = "output")(concatenated)
    
        # Create the interpretation model (accepts the inputs from the branches above, single output)
        top_model = Model(inputs = inputs, outputs = outputs)
        
        # Compile the Model
        top_model.compile(loss='mse', optimizer = Adam(learning_rate = 0.0001), metrics = ['mse'])
    
        # Print the model summary
        top_model.summary()
        
        return top_model
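
A minimal usage sketch, assuming the saved model at the path above exists and is loadable; the sample count and the width of input2 (which must match what previous_model expects) are hypothetical:

    import numpy as np

    n = 128                                  # hypothetical sample count
    input1 = np.random.rand(n)               # 1-D; train_model sizes its Input as (n, 1)
    input2 = np.random.rand(n, 1024)         # hypothetical width; must match previous_model
    input3 = np.random.rand(n)
    targets = np.random.rand(n, 1)

    model = train_model(input1, input2, input3)

    # Feed the same expanded shapes the Input layers were sized from
    model.fit(
        {"input1": np.expand_dims(input1, 1),
         "input2": input2,
         "input3": np.expand_dims(input3, 1)},
        targets,
        epochs=10,
        batch_size=32,
    )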