Python: how do I add a single point as a feature in an encoder-decoder time-series model?

Tags: python, deep-learning, lstm

I have been doing seq2seq time-series forecasting with an encoder-decoder LSTM architecture. The model's input data has two features, essentially two arrays: one is the dependent variable (the y values) and the other is the independent variable (the x values). The array has the shape:

input_shape: (57, 20, 2)
where, for example, a single time series of x and y values has shape (1, 20, 2), and the x and y values sit in the 3D array at:

x = input_shape[:, :, 0]  # x values of every series, shape (57, 20)
y = input_shape[:, :, 1]  # y values of every series, shape (57, 20)
I am now facing the challenge of feeding in a single point (an x-y time step, so to speak) as an additional feature. Is there any way to do this?

Edit: I have added the model I am using, as requested in the comments. It may be worth noting that the input sizes I mention here are kept small for simplicity; the actual inputs I use are much larger.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Masking, Bidirectional, LSTM,
                                     RepeatVector, TimeDistributed, Dense)
from tensorflow.keras import optimizers

model = Sequential()
# Encoder: mask zero-padded time steps, then two stacked bidirectional LSTMs
model.add(Masking(mask_value=0, input_shape=(input_shape.shape[1], 2)))
model.add(Bidirectional(LSTM(128, dropout=0, return_sequences=True, activation='tanh')))
model.add(Bidirectional(LSTM(128, dropout=0, return_sequences=False)))

# Bridge: repeat the encoded vector once per output time step
model.add(RepeatVector(targets.shape[1]))

# Decoder: two stacked bidirectional LSTMs followed by per-step dense layers
model.add(Bidirectional(LSTM(128, dropout=0, return_sequences=True, activation='tanh')))
model.add(Bidirectional(LSTM(128, dropout=0, return_sequences=True)))
model.add(TimeDistributed(Dense(64, activation='relu')))
model.add(TimeDistributed(Dense(1, activation='linear')))

model.build()
model.compile(optimizer=optimizers.Adam(0.00001), loss='MAE')
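
For context, a minimal training call wiring the arrays described above into this model might look like the sketch below; the batch size, epoch count and validation split are placeholder values, not part of the original post.

# Sketch only: input_shape is the (57, 20, 2) array described above and targets
# is the target array whose second dimension sets the output length.
model.fit(input_shape, targets, batch_size=8, epochs=50, validation_split=0.1)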

I would give your model two inputs: the first is your normal time series of shape (batch, 20, 2), and the second is your special time point of shape (batch, 2). Then define the architecture below, which repeats the special point 20 times to get (batch, 20, 2) and concatenates that with the normal input, giving (batch, 20, 4). (Note that I hard-coded target_shape_1 so that it would compile on my end, but you can replace it with targets.shape[1].)
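
The answer's code listing for this architecture did not survive in this copy, so the following is a reconstruction of the two-input functional-API model implied by the summary() shown below. The layer names key_time_point, normal_inputs and key_time_repeater are taken from that summary; target_shape_1 = 3 is assumed to match it, and a Concatenate layer stands in for the tf.concat op that appears there.

from tensorflow.keras.models import Model
from tensorflow.keras.layers import (Input, RepeatVector, Concatenate, Masking,
                                     Bidirectional, LSTM, TimeDistributed, Dense)
from tensorflow.keras import optimizers

target_shape_1 = 3  # assumed value; replace with targets.shape[1]

# Two inputs: the normal (batch, 20, 2) series and the special (batch, 2) point
key_time_point = Input(shape=(2,), name='key_time_point')
normal_inputs = Input(shape=(20, 2), name='normal_inputs')

# Repeat the special point across all 20 time steps and append it as two
# extra features: (batch, 20, 2) + (batch, 20, 2) -> (batch, 20, 4)
key_time_repeated = RepeatVector(20, name='key_time_repeater')(key_time_point)
x = Concatenate()([normal_inputs, key_time_repeated])

# Same encoder-decoder stack as the original Sequential model
x = Masking(mask_value=0)(x)
x = Bidirectional(LSTM(128, dropout=0, return_sequences=True, activation='tanh'))(x)
x = Bidirectional(LSTM(128, dropout=0, return_sequences=False))(x)
x = RepeatVector(target_shape_1)(x)
x = Bidirectional(LSTM(128, dropout=0, return_sequences=True, activation='tanh'))(x)
x = Bidirectional(LSTM(128, dropout=0, return_sequences=True))(x)
x = TimeDistributed(Dense(64, activation='relu'))(x)
outputs = TimeDistributed(Dense(1, activation='linear'))(x)

model = Model(inputs=[key_time_point, normal_inputs], outputs=outputs)
model.compile(optimizer=optimizers.Adam(0.00001), loss='MAE')

Training this version takes two arrays, e.g. model.fit([key_points_array, normal_series_array], targets, ...), where both input array names are placeholders.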

The model's summary() looks like this:

Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
key_time_point (InputLayer)     [(None, 2)]          0                                            
__________________________________________________________________________________________________
normal_inputs (InputLayer)      [(None, 20, 2)]      0                                            
__________________________________________________________________________________________________
key_time_repeater (RepeatVector (None, 20, 2)        0           key_time_point[0][0]             
__________________________________________________________________________________________________
tf_op_layer_concat_3 (TensorFlo [(None, 20, 4)]      0           normal_inputs[0][0]              
                                                                 key_time_repeater[0][0]          
__________________________________________________________________________________________________
masking_4 (Masking)             (None, 20, 4)        0           tf_op_layer_concat_3[0][0]       
__________________________________________________________________________________________________
bidirectional_12 (Bidirectional (None, 20, 256)      136192      masking_4[0][0]                  
__________________________________________________________________________________________________
bidirectional_13 (Bidirectional (None, 256)          394240      bidirectional_12[0][0]           
__________________________________________________________________________________________________
repeat_vector_11 (RepeatVector) (None, 3, 256)       0           bidirectional_13[0][0]           
__________________________________________________________________________________________________
bidirectional_14 (Bidirectional (None, 3, 256)       394240      repeat_vector_11[0][0]           
__________________________________________________________________________________________________
bidirectional_15 (Bidirectional (None, 3, 256)       394240      bidirectional_14[0][0]           
__________________________________________________________________________________________________
time_distributed_7 (TimeDistrib (None, 3, 64)        16448       bidirectional_15[0][0]           
__________________________________________________________________________________________________
time_distributed_8 (TimeDistrib (None, 3, 1)         65          time_distributed_7[0][0]         
==================================================================================================
Total params: 1,335,425
Trainable params: 1,335,425
Non-trainable params: 0
__________________________________________________________________________________________________

Comments:

Is your point another feature, or a lagged y value? Are you trying to feed it to the encoder as another set of inputs, or to the decoder along with the encoder states?

The point is meant to be a special point among the y values, whose index would be the corresponding x value. The point is an input to the encoder (just like the other 2 features), and the encoder's output is fed on to the decoder.

I may be wrong, but it sounds like you want to implement something like an attention mechanism: there is a special time step in your input and you want your network to focus on it. Maybe that would help. If you stick with your current approach, let me know and I may be able to help. I'm also a bit confused about what shape you want the input to be. You currently have y and x at every time step, giving shape (57, 20, 2), so shouldn't one batch be (1, 20, 2) rather than (1, 20, 1)? Are you trying to add the y, x values of one special time step within the past 20 to every other time step in those 20, to make a (57, 20, 4) input?

Ah yes, sorry, my mistake. Yes, each batch should have shape (1, 20, 2), and I want to add the special x-y time step so that the net input becomes (57, 20, 4); see the sketch after these comments. I have edited my post :). Also, I have been working with attention mechanisms, and they may not solve this. Basically, I'm having trouble representing a single point as an additional feature when the other (already existing) features are arrays.

I don't have any data to feed it to test it thoroughly, but I think it fits. Let me know if you have any issues.

This looks like the way to go! I'll try it and update the results here! My simulation is running at the moment. Whether or not its predictions turn out to be correct, you really did help with this specific question about data representation, which is very generous to say the least. Thanks a lot, @Jeff :)

@harris always happy to help :]
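
For the (57, 20, 4) representation discussed in the comments, a minimal numpy sketch of tiling the special point across the window and appending it to the existing features could look like this; the array names series and key_points (and the random data) are purely illustrative, not from the original post.

import numpy as np

# Illustrative stand-ins: series is the existing (57, 20, 2) input,
# key_points holds one special (x, y) point per sample, shape (57, 2).
series = np.random.rand(57, 20, 2)
key_points = np.random.rand(57, 2)

# Repeat the point across all 20 time steps -> (57, 20, 2),
# then append it as two extra features -> (57, 20, 4).
key_repeated = np.repeat(key_points[:, np.newaxis, :], series.shape[1], axis=1)
combined = np.concatenate([series, key_repeated], axis=-1)
print(combined.shape)  # (57, 20, 4)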
Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
key_time_point (InputLayer)     [(None, 2)]          0                                            
__________________________________________________________________________________________________
normal_inputs (InputLayer)      [(None, 20, 2)]      0                                            
__________________________________________________________________________________________________
key_time_repeater (RepeatVector (None, 20, 2)        0           key_time_point[0][0]             
__________________________________________________________________________________________________
tf_op_layer_concat_3 (TensorFlo [(None, 20, 4)]      0           normal_inputs[0][0]              
                                                                 key_time_repeater[0][0]          
__________________________________________________________________________________________________
masking_4 (Masking)             (None, 20, 4)        0           tf_op_layer_concat_3[0][0]       
__________________________________________________________________________________________________
bidirectional_12 (Bidirectional (None, 20, 256)      136192      masking_4[0][0]                  
__________________________________________________________________________________________________
bidirectional_13 (Bidirectional (None, 256)          394240      bidirectional_12[0][0]           
__________________________________________________________________________________________________
repeat_vector_11 (RepeatVector) (None, 3, 256)       0           bidirectional_13[0][0]           
__________________________________________________________________________________________________
bidirectional_14 (Bidirectional (None, 3, 256)       394240      repeat_vector_11[0][0]           
__________________________________________________________________________________________________
bidirectional_15 (Bidirectional (None, 3, 256)       394240      bidirectional_14[0][0]           
__________________________________________________________________________________________________
time_distributed_7 (TimeDistrib (None, 3, 64)        16448       bidirectional_15[0][0]           
__________________________________________________________________________________________________
time_distributed_8 (TimeDistrib (None, 3, 1)         65          time_distributed_7[0][0]         
==================================================================================================
Total params: 1,335,425
Trainable params: 1,335,425
Non-trainable params: 0
__________________________________________________________________________________________________