Python Tensorflow 2.0: How to implement a feature-level fusion network?


I am trying to implement a small model for a prediction task in TensorFlow, where two signals serve as inputs, each passes through a few layers separately, and the two are then combined in later layers to produce the output prediction. Basically, the model works as follows:

(Signal A) -> [L 1] -> [L 2] -> ... -> [L k] 
                                            \
                                             \
                                               -> [L k+1] ->...-> [Final Layer] -> Output
                                             /
                                            /
(Signal B) -> [L 1] -> [L 2] -> ... -> [L k]

where the [L i] are different layers of the network. Before the fusion, the first part of the network is identical for both signals. What is the correct way to implement this model in TensorFlow 2.0? I believe Sequential is not an option in this scenario, but can I build it with the functional API, or should I implement it through model subclassing? From what I have read, the two approaches don't seem to differ much.
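For comparison, here is a minimal sketch of the same two-branch, fuse-then-predict topology written in the model-subclassing style (the encoder layers below are placeholders, not taken from the question); the functional-API route is the one the answer below takes:

import tensorflow as tf

class FusionNet(tf.keras.Model):
    """Two signals pass through a shared encoder, are fused, then predicted on."""
    def __init__(self):
        super().__init__()
        # Shared per-signal encoder (placeholder layers)
        self.encoder = tf.keras.Sequential([
            tf.keras.layers.Conv1D(64, 3, activation='relu'),
            tf.keras.layers.GlobalAveragePooling1D(),
            tf.keras.layers.Dense(64, activation='relu'),
        ])
        self.fusion = tf.keras.layers.Concatenate()
        self.head = tf.keras.layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        signal_a, signal_b = inputs
        fused = self.fusion([self.encoder(signal_a), self.encoder(signal_b)])
        return self.head(fused)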

Here is a template for the model using the functional API; you can change the layers as needed.

Your base model (common to both signals) -
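A minimal sketch of such a base model, reconstructed to match the summary that follows (kernel sizes, pool sizes and filter counts are inferred from the output shapes and parameter counts; the ReLU activations are an assumption):

from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D,
                                     BatchNormalization, Flatten, Dense)
from tensorflow.keras.models import Model

input_shape = (256, 1)

# Shared encoder that will be applied to each signal separately
sig_input = Input(input_shape)
x = Conv1D(64, 3, activation='relu')(sig_input)
x = MaxPooling1D(2)(x)
x = BatchNormalization()(x)
x = Conv1D(128, 3, activation='relu')(x)
x = MaxPooling1D(2)(x)
x = BatchNormalization()(x)
x = Conv1D(128, 3, activation='relu')(x)
x = MaxPooling1D(2)(x)
x = BatchNormalization()(x)
x = Conv1D(256, 3, activation='relu')(x)
x = MaxPooling1D(2)(x)
x = BatchNormalization()(x)
x = Flatten()(x)
encoded = Dense(64, activation='relu')(x)

conv_base = Model(sig_input, encoded)
conv_base.summary()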

Network summary:

Model: "model_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_6 (InputLayer)         [(None, 256, 1)]          0         
_________________________________________________________________
conv1d_12 (Conv1D)           (None, 254, 64)           256       
_________________________________________________________________
max_pooling1d_12 (MaxPooling (None, 127, 64)           0         
_________________________________________________________________
batch_normalization_12 (Batc (None, 127, 64)           256       
_________________________________________________________________
conv1d_13 (Conv1D)           (None, 125, 128)          24704     
_________________________________________________________________
max_pooling1d_13 (MaxPooling (None, 62, 128)           0         
_________________________________________________________________
batch_normalization_13 (Batc (None, 62, 128)           512       
_________________________________________________________________
conv1d_14 (Conv1D)           (None, 60, 128)           49280     
_________________________________________________________________
max_pooling1d_14 (MaxPooling (None, 30, 128)           0         
_________________________________________________________________
batch_normalization_14 (Batc (None, 30, 128)           512       
_________________________________________________________________
conv1d_15 (Conv1D)           (None, 28, 256)           98560     
_________________________________________________________________
max_pooling1d_15 (MaxPooling (None, 14, 256)           0         
_________________________________________________________________
batch_normalization_15 (Batc (None, 14, 256)           1024      
_________________________________________________________________
flatten_3 (Flatten)          (None, 3584)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 64)                229440    
=================================================================
Total params: 404,544
Trainable params: 403,392
Non-trainable params: 1,152
The second, fusion network -


from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

left_input = Input(input_shape)
right_input = Input(input_shape)

# encode each of the two inputs into a vector with the base conv model
encoded_l = conv_base(left_input)
encoded_r = conv_base(right_input)



fusion = Concatenate()([encoded_l,encoded_r]) # this can be any other fusion method too

prediction = Dense(1, activation='sigmoid')(fusion)

twin_net = Model([left_input,right_input],prediction)

optimizer = Adam(0.001)

twin_net.compile(loss="binary_crossentropy",optimizer=optimizer)

twin_net.summary()

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_7 (InputLayer)            [(None, 256, 1)]     0
__________________________________________________________________________________________________
input_8 (InputLayer)            [(None, 256, 1)]     0
__________________________________________________________________________________________________
model_2 (Model)                 (None, 64)           404544      input_7[0][0]
                                                                 input_8[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate)       (None, 128)          0           model_2[1][0]
                                                                 model_2[2][0]
__________________________________________________________________________________________________
dense_4 (Dense)                 (None, 1)            129         concatenate[0][0]
==================================================================================================
Total params: 404,673
Trainable params: 403,521
Non-trainable params: 1,152

Great answer! There is one small detail I would like to understand better, though. For conv_base, the model is started with sig_input = Input(input_shape). Later, when encoded_l and encoded_r are created, why do they need to be created with conv_base(Input(input_shape))? Oh, and another question: assuming I use numpy arrays as input, how should the input to Model.fit be organized? Currently my input is an (N, S, V) numpy array, where N is the number of samples, S the number of signals, and V the dimensionality of each signal. How should I reshape it to pass it as input to twin_net?

In the functional API, every time you create a model instance it needs at least one input layer. conv_base is the sub-model (or common model) shared by the two signals, but it is still a model, so it needs an input; likewise, the final architecture is another model built on top of the common conv_base sub-model, so it needs its own inputs as well. I also think the shape description may not be right: if N = number of samples and S = number of signals, then the number of samples and the number of signals would appear to be the same thing. I think you mean N = number of samples, S = number of time steps per signal, and V = the dimensionality of the signal, i.e. the features per time step. If that is your input shape, you don't need to reshape anything, because that is exactly the numpy array shape the model expects.

Sorry, I phrased that badly. By "samples" I meant "training examples"; I'm used to calling them samples, but next to "signals" it was a poor choice of words. In a concrete case, I have about 1M training examples (so N = 1M), each example consists of 2 signals (so S = 2), and each signal consists of 150 features (so V = 150; they are 30 s signals sampled at 5 Hz). In this scenario I used input_shape = (150, 1) (with a Reshape layer). I tried feeding the (N, S, V) numpy array into the model, but got a TensorFlow error!
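Regarding that last question about the (N, S, V) array: with S = 2 signals per training example, the array can simply be split along the signal axis into the two arrays that the two-input twin_net expects. A rough sketch, assuming the base model was built with input_shape = (150, 1) as described above (the array names and sizes are illustrative):

import numpy as np

# X: (N, 2, 150) array of paired signals, y: (N,) binary labels -- illustrative data
X = np.random.rand(1000, 2, 150).astype("float32")
y = np.random.randint(0, 2, size=(1000,)).astype("float32")

# Split along the signal axis and add the channel dimension that Conv1D expects
x_left = X[:, 0, :, np.newaxis]   # shape (N, 150, 1)
x_right = X[:, 1, :, np.newaxis]  # shape (N, 150, 1)

# A two-input functional model is fed a list of arrays, one per Input layer
twin_net.fit([x_left, x_right], y, batch_size=64, epochs=5)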