Keras Conv1D to Conv2D


Summarize the problem: I have a raw signal from a sensor that is 76,000 data points long, and I want to process it with a CNN. To do that, I thought I could use a Lambda layer to compute the STFT from the raw signal, like so:

x = Lambda(lambda v: tf.abs(tf.signal.stft(v,frame_length=frame_length,frame_step=frame_step)))(x)
This works perfectly fine. But I want to go a step further and process the raw data beforehand, hoping that a Conv1D layer can act as a filter, letting some frequencies pass and blocking others.

What I tried: I do have two separate examples up and running (a Conv1D example that processes the raw data, and a Conv2D example that processes the STFT "image"). But I would like to combine them.

The Conv1D model, where the input is: input = Input(shape=(76000,))
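A minimal sketch of what that standalone Conv1D path can look like (the filter count and kernel size are taken from the combined attempt shown further below):

from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Conv1D, Lambda

input = Input(shape=(76000,))                      # raw signal, no channel axis yet
x = Lambda(lambda v: K.expand_dims(v, -1))(input)  # (None, 76000, 1)
x = Conv1D(filters=10, kernel_size=100, activation='relu')(x)  # (None, 75901, 10)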

And the Conv2D model, starting from the same input:

  x = Lambda(lambda v:tf.expand_dims(tf.abs(tf.signal.stft(v,frame_length=frame_length,frame_step=frame_step)),-1))(input)
  x = BatchNormalization()(x)
Model: "model_4"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_6 (InputLayer)         [(None, 76000)]           0         
_________________________________________________________________
lambda_8 (Lambda)            (None, 751, 513, 1)       0         
_________________________________________________________________
batch_normalization_3 (Batch (None, 751, 513, 1)       4         
_________________________________________________________________
. . .
. . . 
flatten_4 (Flatten)          (None, 1360)              0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 1360)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 1361      
I am looking for a way to combine the two, feeding the Conv1D's output into the 'lambda_8' layer. If I simply put them together, I get:

  x = Lambda(lambda v: tf.expand_dims(v,-1))(input)
  x = layers.Conv1D(filters =10,kernel_size=100,activation = 'relu')(x)
  #x = Flatten()(x)
  x = Lambda(lambda v:tf.expand_dims(tf.abs(tf.signal.stft(v,frame_length=frame_length,frame_step=frame_step)),-1))(x)
Layer (type)                 Output Shape              Param #   
=================================================================
input_6 (InputLayer)         [(None, 76000)]           0         
_________________________________________________________________
lambda_17 (Lambda)           (None, 76000, 1)          0         
_________________________________________________________________
conv1d_6 (Conv1D)            (None, 75901, 10)         1010      
_________________________________________________________________
lambda_18 (Lambda)           (None, 75901, 0, 513, 1)  0         <-- Wrong
=================================================================
From this, it seems that stft only accepts (..., length) inputs, not (..., length, channels).
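A quick way to confirm this behavior: tf.signal.stft transforms only the last axis and treats all leading axes as batch dimensions. The frame_length=1000 / frame_step=100 values below are assumptions, chosen because they reproduce the (751, 513) shapes in the summaries above:

import tensorflow as tf

# stft operates on the last axis; leading axes act as batch dimensions
v = tf.random.normal((2, 10, 76000))                      # (batch, channels, length)
s = tf.signal.stft(v, frame_length=1000, frame_step=100)  # assumed frame params
print(s.shape)                                            # (2, 10, 751, 513)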

So, the first suggestion is to move the channels to another dimension first, keeping the length in the last axis, so that the function works.
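In Keras, that axis swap can be done with a Permute layer; note that its indices are 1-based and exclude the batch axis. A tiny sketch:

from tensorflow.keras.layers import Input, Permute

t = Input((76000, 10))    # (batch, length, channels)
p = Permute((2, 1))(t)    # (batch, 10, 76000): length last, as stft expects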

Now, of course, you need matching lengths, and you can't match 76000 with 75901. So the second suggestion is to use padding='same' in the 1D convolutions to keep the lengths equal.
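For example, with the same filter settings as in the question, the difference looks like this:

from tensorflow.keras.layers import Input, Conv1D

inp = Input((76000, 1))
valid = Conv1D(10, 100, padding='valid')(inp)  # (None, 75901, 10): length shrinks
same = Conv1D(10, 100, padding='same')(inp)    # (None, 76000, 10): length preserved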

Finally, since the result of the stft will already have 10 channels, you don't need to expand dims in the last Lambda.

Summing up:

The 1D part:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Conv1D, Lambda, Permute

inputs = Input((76000,)) #(batch, 76000)

c1Out = Lambda(lambda x: K.expand_dims(x, axis=-1))(inputs) #(batch, 76000, 1)
c1Out = Conv1D(10, 100, activation = 'relu', padding='same')(c1Out) #(batch, 76000, 10)

#permute to put length last, apply stft, then move the channels back
c1Stft = Permute((2,1))(c1Out) #(batch, 10, 76000)
c1Stft = Lambda(lambda v: tf.abs(tf.signal.stft(v,
                                                frame_length=frame_length,
                                                frame_step=frame_step)
                                 )
                )(c1Stft) #(batch, 10, probably 751, probably 513)
c1Stft = Permute((2,3,1))(c1Stft) #(batch, 751, 513, 10)
The 2D part; your code seems fine:

c2Out = Lambda(lambda v: tf.expand_dims(tf.abs(tf.signal.stft(v,
                                                              frame_length=frame_length,
                                                              frame_step=frame_step)
                                               ),
                                        -1))(inputs) #(batch, 751, 513, 1)

Now everything has compatible dimensions:

#maybe
#c2Out = Conv2D(10, ..., padding='same')(c2Out) 

joined = Concatenate()([c1Stft, c2Out]) #(batch, 751, 513, 11) #maybe (batch, 751, 513, 20)

further = BatchNormalization()(joined)
further = Conv2D(...)(further)
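
For reference, a runnable end-to-end sketch assembling the pieces above into one model. The frame parameters, the Conv2D settings, and the pooling/dense head are illustrative assumptions, not part of the original code:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import (Input, Lambda, Conv1D, Conv2D, Permute,
                                     Concatenate, BatchNormalization,
                                     GlobalAveragePooling2D, Dense)
from tensorflow.keras.models import Model

frame_length, frame_step = 1000, 100  # assumed; they yield (751, 513) spectrograms

inputs = Input((76000,))

# 1D branch: learn filters on the raw signal, then STFT each filtered channel
c1 = Lambda(lambda v: K.expand_dims(v, -1))(inputs)          # (batch, 76000, 1)
c1 = Conv1D(10, 100, activation='relu', padding='same')(c1)  # (batch, 76000, 10)
c1 = Permute((2, 1))(c1)                                     # (batch, 10, 76000)
c1 = Lambda(lambda v: tf.abs(tf.signal.stft(
        v, frame_length=frame_length, frame_step=frame_step)))(c1)  # (batch, 10, 751, 513)
c1 = Permute((2, 3, 1))(c1)                                  # (batch, 751, 513, 10)

# 2D branch: STFT of the unfiltered signal
c2 = Lambda(lambda v: tf.expand_dims(tf.abs(tf.signal.stft(
        v, frame_length=frame_length, frame_step=frame_step)), -1))(inputs)  # (batch, 751, 513, 1)

joined = Concatenate()([c1, c2])                             # (batch, 751, 513, 11)
x = BatchNormalization()(joined)
x = Conv2D(16, 3, activation='relu', padding='same')(x)      # illustrative 2D conv
x = GlobalAveragePooling2D()(x)
outputs = Dense(1)(x)                                        # matches dense_2 above

model = Model(inputs, outputs)
model.summary()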

Warning: I don't know whether they made stft differentiable; the Conv1D part will only train if its gradient is defined.

Thanks, that solved the dimension problem. Do you mean that backpropagation through the STFT "layer" could fail? — I don't know; if it has a gradient defined, it will work (test it :) ). I just don't know that operation well enough.
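A minimal way to run that test is to check, in eager mode, whether a gradient flows back through tf.abs(tf.signal.stft(...)); the frame parameters are again assumed:

import tensorflow as tf

x = tf.Variable(tf.random.normal((1, 76000)))
with tf.GradientTape() as tape:
    y = tf.reduce_sum(tf.abs(tf.signal.stft(x, frame_length=1000, frame_step=100)))
g = tape.gradient(y, x)
print(g)  # a non-None tensor here means gradients flow, so the Conv1D part can train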