Python: How do I correctly predict a single value from a series of 2D-shaped inputs?

Tags: python, tensorflow, keras, deep-learning, prediction

I am using an encoder-decoder architecture with 3 layers each in the encoder and the decoder, and 128 neurons in every hidden layer. The input is two-dimensional: the first column holds days and the second a day-dependent time series (shape: (5780, 100, 2)). The output is a single value from the first column, the specific day at which a breakpoint occurs (shape: (5780, 1, 1)). The breakpoint is one of the time-dependent values, i.e. from the second column.

A better picture of the input is:

array([[  0.        ,   1.        ],
       [  2.        ,   1.14469799],
       [  4.        ,   1.35245666],
       ...,
       [ 96.        ,   1.80030942],
       [ 98.        ,   1.79964733],
       [100.        ,   1.9898739]])
where the days are in the first column and the corresponding measurement points in the second.

The output is just a single value, the day at which the breakpoint occurs:

array([[1108.]])
The problem is that after training, the outputs for all the different test data are almost exactly the same, i.e. the breakpoints for all the different materials fall on the same day (the variation is only in negligible decimal places). I have tried high and low learning rates (from 1e-2 down to 1e-5) and different numbers of training epochs (300 to 3000). I have also varied the number of layers and the neurons per layer.
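A quick way to quantify this collapse is to look at the spread of `model.predict()` outputs over the test set. A minimal sketch with hypothetical numbers (`prediction_spread` is not part of the original code):

```python
import numpy as np

# Hypothetical helper: summarize how much a model's predictions vary.
# `preds` is whatever model.predict() returned for the test set.
def prediction_spread(preds):
    preds = np.asarray(preds, dtype=float).ravel()
    return {"min": float(preds.min()),
            "max": float(preds.max()),
            "std": float(preds.std())}

# A collapsed model yields almost identical breakpoint days:
stats = prediction_spread([1108.01, 1108.02, 1107.99, 1108.00])
print(stats)  # std close to zero -> effectively one constant prediction
```

A standard deviation that is near zero relative to the target range (here, days up to 1000+) confirms the model has learned to output roughly a single constant, typically near the mean of the training targets.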

What I have not done is batch normalization or normalization of any kind, but I have worked with the same data and the same gradients in a similar setup, and that worked very well.
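For reference, the kind of normalization meant here can be sketched as follows. This is a hand-rolled equivalent of fitting scikit-learn's StandardScaler on the flattened data; the array shapes mirror the question, but the values are made up:

```python
import numpy as np

# Hypothetical stand-ins for the real arrays: (samples, timesteps, features)
X = np.random.rand(50, 100, 2) * 100.0   # days in column 0, measurements in column 1
y = np.random.rand(50, 1) * 2000.0       # one breakpoint day per sample

# Per-feature standardization over all samples and timesteps
mu = X.reshape(-1, X.shape[-1]).mean(axis=0)
sigma = X.reshape(-1, X.shape[-1]).std(axis=0)
X_scaled = (X - mu) / sigma

# Scaling the target too keeps the regression head in a friendly range
y_mu, y_sigma = y.mean(), y.std()
y_scaled = (y - y_mu) / y_sigma

# After predicting in scaled space, invert to recover days:
# days = model.predict(X_scaled) * y_sigma + y_mu
```

Without target scaling, the network has to regress values around 1000 from tanh-bounded LSTM activations, which by itself can push it toward predicting a constant.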

The architecture I am using looks like this:

import tensorflow as tf
from tensorflow.keras.layers import (Input, Masking, Bidirectional, LSTM,
                                     RepeatVector, TimeDistributed, Dense,
                                     Activation, concatenate, dot)
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers

nodes = 128
drp = 0.01

# Defining input layers and shapes
input_train = Input(shape=(complete_inputs.shape[1], complete_inputs.shape[2]))
output_train = Input(shape=(kp_targets.shape[1], kp_targets.shape[2]))

# Masking layer (input_shape is redundant in the functional API; the layer
# infers it from input_train)
masking_layer = Masking(mask_value=0)(input_train)

# Encoder layer. For simple S2S model, we only need the last state_h and the last state_c.
enc_first_layer = Bidirectional(LSTM(nodes, dropout=drp, return_sequences=True, return_state=True))(masking_layer)
enc_first_layer, enc_fwd_h1, enc_fwd_c1, enc_back_h1, enc_back_c1 = Bidirectional(LSTM(nodes, dropout=drp, return_sequences=True, return_state=True))(enc_first_layer)
enc_stack_h, enc_fwd_h2, enc_fwd_c2, enc_back_h2, enc_back_c2 = Bidirectional(LSTM(nodes, dropout=drp, return_sequences=True, return_state=True))(enc_first_layer)

enc_last_h1 = concatenate([enc_fwd_h1, enc_back_h1])
enc_last_h2 = concatenate([enc_fwd_h2, enc_back_h2])
enc_last_c1 = concatenate([enc_fwd_c1, enc_back_c1])
enc_last_c2 = concatenate([enc_fwd_c2, enc_back_c2])


# RepeatVector layer (using only the last hidden state of encoder)
rv = RepeatVector(output_train.shape[1])(enc_last_h2)

# Stacked decoder layer for alignment score calculation (using the last hidden state of encoder)
dec_stack_h = Bidirectional(LSTM(nodes, dropout=drp, return_state=False, return_sequences=True))(rv, initial_state=[enc_fwd_h1, enc_fwd_c1, enc_back_h1, enc_back_c1])
dec_stack_h = Bidirectional(LSTM(nodes, dropout=drp, return_state=False, return_sequences=True))(dec_stack_h)
dec_stack_h = Bidirectional(LSTM(nodes, dropout=drp, return_state=False, return_sequences=True))(dec_stack_h, initial_state=[enc_fwd_h2, enc_fwd_c2, enc_back_h2, enc_back_c2])


# Attention layer (uses STACKED encoder output and dots it with stacked decoder output)
attention_ = dot([dec_stack_h, enc_stack_h], axes=[2,2])
attention_ = Activation('softmax')(attention_)

# Calculating the context vector
context = dot([attention_, enc_stack_h], axes=[2,1])

# Concat the context vector and stacked hidden states of decoder, and use it as input to the last dense layer
dec_combined_context = concatenate([context, dec_stack_h])


# Output TimeDistributed dense layers. Note: the second layer must consume
# `out`, not `dec_combined_context`, otherwise the ReLU layer is disconnected
# from the graph. Dense also needs an integer unit count (// not /).
out = TimeDistributed(Dense(nodes // 2, activation='relu'))(dec_combined_context)
out = TimeDistributed(Dense(output_train.shape[2], activation='linear'))(out)

# Compile model
model_attn = Model(inputs=input_train, outputs=out)
opt = optimizers.Adam(learning_rate=0.004)
model_attn.compile(optimizer=opt, loss=masked_mae)
What could be going wrong here?

To take a broader view of the problem, I also wonder: is this model overkill? Is there another machine/deep learning model better suited to predicting this kind of output from the data I have?

I have been working on this problem for a week without any improvement, so any help would be greatly appreciated.

Edit 1: Tried normalization with StandardScaler and simpler architectures, with no improvement so far. Below is the structure; the commented-out parts were tried in every possible combination.

nodes = 130 # Tried with 10/30/40/80

model_attn = Sequential()
#model_attn.add(Masking(mask_value=0, input_shape = (complete_inputs.shape[1], complete_inputs.shape[2])))

#model_attn.add(Bidirectional(LSTM(nodes, dropout=0.1, return_sequences=True)))
#model_attn.add(Bidirectional(LSTM(nodes, dropout=0.1, return_sequences=True)))
model_attn.add(Bidirectional(LSTM(nodes, dropout=0.1, return_sequences=False)))

model_attn.add(Dense(1))
model_attn.compile(optimizer=optimizers.Adam(0.001), loss='mae')

The loss does not decrease over time:

model_attn.fit(complete_inputs, kp_targets, batch_size=350, epochs=300, shuffle=True, validation_split=0.1, callbacks=[callback])

Epoch 1/300
11/11 [==============================] - 18s 2s/step - loss: 0.7930 - val_loss: 0.3486
Epoch 2/300
11/11 [==============================] - 16s 1s/step - loss: 0.7544 - val_loss: 0.5152
Epoch 3/300
11/11 [==============================] - 16s 1s/step - loss: 0.7406 - val_loss: 0.4794
Epoch 4/300
11/11 [==============================] - 16s 1s/step - loss: 0.7385 - val_loss: 0.5361
Epoch 5/300
11/11 [==============================] - 16s 1s/step - loss: 0.7367 - val_loss: 0.4821
Epoch 6/300
11/11 [==============================] - 16s 1s/step - loss: 0.7350 - val_loss: 0.5518
Epoch 7/300
11/11 [==============================] - 18s 2s/step - loss: 0.7344 - val_loss: 0.5151
Epoch 8/300
11/11 [==============================] - 17s 2s/step - loss: 0.7339 - val_loss: 0.5646
Epoch 9/300
11/11 [==============================] - 16s 1s/step - loss: 0.7380 - val_loss: 0.5277
Epoch 10/300
11/11 [==============================] - 16s 1s/step - loss: 0.7382 - val_loss: 0.4879
Epoch 11/300
11/11 [==============================] - 16s 1s/step - loss: 0.7367 - val_loss: 0.5367
Epoch 12/300
11/11 [==============================] - 16s 1s/step - loss: 0.7382 - val_loss: 0.4910
Epoch 13/300
11/11 [==============================] - 16s 1s/step - loss: 0.7354 - val_loss: 0.5244
Epoch 14/300
11/11 [==============================] - 16s 1s/step - loss: 0.7386 - val_loss: 0.5043
Epoch 15/300
11/11 [==============================] - 16s 1s/step - loss: 0.7329 - val_loss: 0.5421
Epoch 16/300
11/11 [==============================] - 16s 1s/step - loss: 0.7376 - val_loss: 0.5023
Epoch 17/300
11/11 [==============================] - 16s 1s/step - loss: 0.7346 - val_loss: 0.4539
.....
.....

Epoch 27/300
11/11 [==============================] - 15s 1s/step - loss: 0.7388 - val_loss: 0.5649
Epoch 28/300
11/11 [==============================] - 16s 1s/step - loss: 0.7329 - val_loss: 0.6575
Epoch 29/300
11/11 [==============================] - 16s 1s/step - loss: 0.7400 - val_loss: 0.5123
Epoch 30/300
11/11 [==============================] - 16s 1s/step - loss: 0.7336 - val_loss: 0.4965
Epoch 31/300
11/11 [==============================] - 16s 1s/step - loss: 0.7328 - val_loss: 0.5069
Epoch 32/300
11/11 [==============================] - 17s 2s/step - loss: 0.7320 - val_loss: 0.5274
Epoch 33/300
11/11 [==============================] - 17s 2s/step - loss: 0.7302 - val_loss: 0.5968
Epoch 34/300
11/11 [==============================] - 16s 1s/step - loss: 0.7354 - val_loss: 0.6161
....
....
....
Epoch 184/300
11/11 [==============================] - 16s 1s/step - loss: 0.7088 - val_loss: 0.8242
Epoch 185/300
11/11 [==============================] - 16s 1s/step - loss: 0.7034 - val_loss: 0.7799
Epoch 186/300
11/11 [==============================] - 16s 1s/step - loss: 0.7098 - val_loss: 0.8179
Epoch 187/300
11/11 [==============================] - 16s 1s/step - loss: 0.7066 - val_loss: 0.7854
Epoch 188/300
11/11 [==============================] - 16s 1s/step - loss: 0.7142 - val_loss: 0.8340
Epoch 189/300
11/11 [==============================] - 16s 1s/step - loss: 0.7123 - val_loss: 0.7197
Neither loss increases or decreases in any particular order. I have also stopped training at the epoch with the smallest loss, but without any improvement.
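The "stop at the epoch with the smallest loss" step can be automated. The snippet below is a tiny pure-Python stand-in for what Keras's `EarlyStopping(monitor='val_loss', patience=..., restore_best_weights=True)` callback does:

```python
# Mirrors EarlyStopping semantics: stop once val_loss has not improved
# for `patience` consecutive epochs, and remember the best epoch so the
# model can be restored to its minimum-loss state.
def early_stop_epoch(val_losses, patience=3):
    best, best_epoch, wait = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch, best

# With a noisy, non-improving curve like the log above, training ends
# shortly after the early minimum:
print(early_stop_epoch([0.35, 0.52, 0.48, 0.54, 0.48, 0.55], patience=3))
# -> (0, 0.35)
```

In the training call above this corresponds to passing e.g. `callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)]` (the patience value here is an illustrative guess).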

Update 1: There was an error in how StandardScaler was applied. After fixing it, the model does seem to output different predictions for the test dataset.

Update 2: A CNN is also a good candidate for this kind of prediction. However, a comparison between the two architectures still needs to be done; I will update my findings here.


Update 3: For this kind of prediction, a CNN turned out to be a better choice than an LSTM, since the data is closer to a classification problem. While more LSTM layers and hyperparameter tuning might also work, my experiments show that for comparable results the CNN runs at least 12x faster than the LSTM, likely with lower memory usage as well.
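A minimal Conv1D baseline of the kind this update refers to might look like the following. The layer sizes are illustrative guesses, not the configuration actually used; the input shape matches the (100, 2) sequences from the question:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical 1D-CNN baseline: (100 timesteps, 2 features) in,
# one (scaled) breakpoint day out.
model_cnn = models.Sequential([
    layers.Input(shape=(100, 2)),
    layers.Conv1D(64, kernel_size=5, activation='relu'),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation='relu'),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1),   # linear head for the regression target
])
model_cnn.compile(optimizer='adam', loss='mae')
```

Since convolutions process all timesteps in parallel instead of sequentially, this kind of model trains much faster than a stacked bidirectional LSTM on the same data, which is consistent with the speedup reported above.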

If the outputs all lie within a known range (say 500-1000), I would first build a classifier to predict the day. This verifies that the data can actually be trained on, and categorical crossentropy (CCE) converges more easily than MAE. Your outputs are all integers, which means you have no sub-day data. This is much like review-score prediction, where you predict how many stars a piece of text would receive. For example, if my data's outputs contained days evenly distributed between 1 and 1000 and the model did not converge, I would try turning it into a 10-class classifier (0-100, 101-200, and so on) and measure accuracy. That reduces what the model has to predict from infinitely many values to 10, and the output values are constrained to 0-1, which is easier to work with.

Note that there are no notifications for post edits, only for new comments. @HarryS [sorry..] I would also suggest trying with and without batch normalization. Finally, although Adam is a very popular optimizer, it can sometimes be unstable (val_loss bouncing up and down); in my case, switching to SGD led to a slower but smoother learning curve.
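The binning idea from this answer can be sketched as follows, with hypothetical breakpoint days and the 10 coarse classes suggested above:

```python
import numpy as np

# Hypothetical breakpoint days; the answer assumes they lie in 1-1000
days = np.array([1108.0, 63.0, 512.0, 999.0, 205.0])

# Map each day onto one of 10 classes of 100 days each (0-based)
days_clipped = np.clip(days, 1, 1000)
classes = ((days_clipped - 1) // 100).astype(int)
print(classes)          # -> [9 0 5 9 2]

# One-hot targets for a softmax head trained with categorical crossentropy
one_hot = np.eye(10)[classes]
```

In Keras terms, the regression head would become `Dense(10, activation='softmax')` compiled with `loss='categorical_crossentropy'`; if accuracy on these coarse bins stays at chance level, the problem is in the data or features rather than the output head.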