Python Keras - predicting a time series based on few historical samples of similar series

python, keras, time-series

I am trying to build a model with Keras that predicts a sensor's time series based on the sensor type and the historical data of sensors of the same type.

The figure below shows 3 time series generated by 3 sensors of the same type; the green dotted line is the new sensor's data, and the vertical line marks where the new sensor's data ends.

I tried writing an LSTM network that trains on the historical data of the other sensors, one history at a time, but this caused the LSTM to base its prediction for the new sensor on the sensor's last day. So I suspect I am on the wrong track. What are the options for predicting a time series from only a few historical samples, based on the history of other time series of the same type?

Any help/references/videos would be appreciated.

Update:
To elaborate, the sensor "score" (shown in the chart above) is generated from a set of features collected over time, i.e.:

f(event_1_count, event_2_count, event_3_count, days_since_last_event_1) = score


The new data (the green line) is then collected in the same way, but at the moment I only have the first 3 days:

    +----------+----+--------------+--------------+--------------+------------------------+
    |sensor_id |day |event_1_count |event_2_count |event_3_count |days_since_last_event_1 |
    +----------+----+--------------+--------------+--------------+------------------------+
    | 4        |0   | 2            | 1            | 0            | 0                      |
    +----------+----+--------------+--------------+--------------+------------------------+
    | 4        |1   | 0            | 10           | 2            | 1                      |
    +----------+----+--------------+--------------+--------------+------------------------+
    | 4        |2   | 0            | 1            | 0            | 2                      |
---END OF DATA---
So obviously I need to take the new features into account. My initial thought was to try to learn the "shape" of the curve from the historical features, and to predict the shape of the new sensor's data based on that model.
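As a minimal sketch of what such input could look like (the column names follow the table above; the values, the 3-day window, and the reshape are illustrative assumptions, showing one common way to feed multivariate data to a Keras LSTM):

import pandas as pd

# hypothetical feature table for the new sensor (first 3 days, as above)
new_sensor = pd.DataFrame({
    'event_1_count': [2, 0, 0],
    'event_2_count': [1, 10, 1],
    'event_3_count': [0, 2, 0],
    'days_since_last_event_1': [0, 1, 2],
})

# reshape to (n_samples, n_timesteps, n_features) = (1, 3, 4),
# which is the input shape a Keras LSTM layer expects
x_new = new_sensor.values.reshape(1, len(new_sensor), new_sensor.shape[1])
print(x_new.shape)  # (1, 3, 4)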


I have shared this with @David's solution, for comments.

There are different approaches, depending on your exact setup and the desired output.

Version A

If you want an LSTM model that takes in a chunk of data and predicts the next step, here is a self-contained example.

The synthetic data only loosely resembles what is shown in your figure, but I hope it is still useful for illustration.

The prediction in the upper panel shows the case where all chunks of the time series are known and the next step is predicted for each of them.

The lower panel shows the more realistic case, in which the beginning of the time series in question is known and the rest is predicted iteratively, one step at a time. Obviously, the prediction error may accumulate and grow over time.

Version B

If the total length of the time series is known and fixed, and you want to "auto-complete" an incomplete time series (the dashed green line in your figure), it is probably easier and more robust to predict many values at once.

However, since for each time series you only take the starting chunk as training data (and predict the remainder of it), this probably requires a larger number of complete time series.

Still, because each time series is used only once during training (rather than being split into many consecutive chunks), training is faster and the results look fine.

Update

Hi Shlomi, thanks for your update. If I understand correctly, instead of a 1D time series you want to use more features, i.e. an nD time series. This is in fact already covered by the model (with a partially undefined n_features variable, now corrected). I added a section "create additional dummy features" in Version B, where dummy features are created by splitting up the original 1D time series (but also keeping the original, which corresponds to your f(...) = score and sounds like a useful engineered feature). Then I only had to add n_features = x_train.shape[2] to the LSTM network setup function. Just make sure your individual features are scaled sensibly (e.g. to [0, 1]) before feeding them into the network. Of course, the prediction quality depends heavily on the actual data.
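For the scaling step, a minimal sketch could look like the following; the MinMaxScaler approach and the dummy array standing in for x_train are assumptions, not part of the answer's code:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# dummy stand-in for the x_train of Version B, shape (samples, steps, features)
x_train = 10 * np.random.rand(9, 50, 4)

# MinMaxScaler expects 2D input, so flatten the sample/time axes,
# scale each feature column to [0, 1], then restore the 3D shape
n_samples, n_steps, n_features = x_train.shape
x_flat = MinMaxScaler().fit_transform(x_train.reshape(-1, n_features))
x_train = x_flat.reshape(n_samples, n_steps, n_features)
print(x_train.min(), x_train.max())  # close to 0.0 and 1.0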

Comments:

Have you considered looking at the mean or median of the other time series, instead of looking at each one individually? Since you assume they behave "similarly", that might be easier to predict. Is it green_nextday = function(green_tillnow, red, blue, yellow)? Or rather green_nextday = function1(green_tillnow); green_tillnow = function2(red, blue, yellow)? My general suggestion would be to look into cointegration (multicointegration) of time series (). Basically, if you can find a weak linear relationship between the green line and the other three colors with their stochastic trends, you can predict it.

Thanks everyone for the comments. The mean is not useful here, because in the real data the variance is larger than in my plot, and some sensors' data may be flat, depending on their history. I only have the green data at prediction time, so green_nextday = function1(green_tillnow).

Is there a way to add an attention layer between these LSTM layers? @David can you take a look here: ?
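On the attention question from the comments: the answer's code does not include it, but a hedged sketch of self-attention between the two LSTM layers could look like this, using the Keras functional API and keras.layers.Attention; the dimensions are copied from train_model below, everything else is an assumption rather than the answer's method:

import keras
import keras.layers
import keras.models

# hypothetical dimensions, matching train_model below
n_steps, n_features, n_units, n_steps_out = 50, 1, 50, 1

# Sequential cannot express the two-input attention call, so use the
# functional API: LSTM -> self-attention over the sequence -> LSTM -> Dense
inputs = keras.layers.Input(shape=(n_steps, n_features))
seq = keras.layers.LSTM(n_units, activation='relu',
                        return_sequences=True)(inputs)
att = keras.layers.Attention()([seq, seq])  # query = value: self-attention
out = keras.layers.LSTM(n_units, activation='relu')(att)
out = keras.layers.Dense(n_steps_out)(out)
model = keras.models.Model(inputs, out)
model.compile(optimizer='adam', loss='mse', metrics=['mse'])
model.summary()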
Version A code:

# import modules
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import keras
import keras.models
import keras.layers
import sklearn
import sklearn.metrics

# please load auxiliary functions defined below!
# (omitted here for better readability)

# set seed
np.random.seed(42)

# number of time series
n_samples = 5

# number of steps used for prediction
n_steps = 50

# number of epochs for LSTM training
epochs = 100

# create synthetic data
# (see bottom left panel below, very roughly resembling your data)
tab = create_data(n_samples)

# train model without first column
x_train, y_train = prepare_data(tab.iloc[:, 1:], n_steps=n_steps)
model, history = train_model(x_train, y_train, n_steps=n_steps, epochs=epochs)

# predict first column for testing
# (all chunks are known and only one time step is predicted for each)
veo = tab[0].copy().values
y_test, y_pred = predict_all(veo, model)

# predict iteratively
# (first chunk is known and new values are predicted iteratively)
vec = veo.copy()
y_iter = predict_iterative(vec, n_steps, model)

# plot results
plot_single(y_test, [y_pred, y_iter], n_steps)
Version B code:

# please load auxiliary functions defined below!
# (omitted here for better readability)

# number of time series
# (n_steps and epochs are reused from the Version A script above)
n_samples = 10

# create synthetic data
# (see bottom left panel below, very roughly resembling your data)
tab = create_data(n_samples)

# prepare training data
x_train = tab.iloc[:n_steps, 1:].values.T
x_train = x_train.reshape(*x_train.shape, 1)
y_train = tab.iloc[n_steps:, 1:].values.T
print(x_train.shape)  # (9, 50, 1) = old shape, 1D time series

# create additional dummy features to demonstrate usage of nD time series input data
# (feature_i = factor_i * score_i, with sum_i factor_i = 1)
feature_factors = [0.3, 0.2, 0.5]
x_train = np.dstack([x_train] + [factor*x_train for factor in feature_factors])
print(x_train.shape)  # (9, 50, 4) = new shape, original 1 + 3 new features

# create LSTM which predicts everything beyond n_steps
n_steps_out = len(tab) - n_steps
model, history = train_model(x_train, y_train, n_steps=n_steps, epochs=epochs,
                             n_steps_out=n_steps_out)

# prepare test data
x_test = tab.iloc[:n_steps, :1].values.T
x_test = x_test.reshape(*x_test.shape, 1)
x_test = np.dstack([x_test] + [factor*x_test for factor in feature_factors])
y_test = tab.iloc[n_steps:, :1].values.T[0]
y_pred = model.predict(x_test)[0]

# plot results
plot_multi(history, tab, y_pred, n_steps)
Auxiliary functions:

def create_data(n_samples):
    # window width for rolling average
    window = 10
    # position of change in trend
    thres = 200
    # time period of interest
    dates = pd.date_range(start='2020-02-16', end='2020-03-15', freq='H')
    # create data frame
    tab = pd.DataFrame(index=dates)
    lend = len(tab)
    lin = np.arange(lend)
    # create synthetic time series
    for ids in range(n_samples):
        trend = 4 * lin - 3 * (lin-thres) * (lin > thres)
        # scale to [0, 1] interval (approximately) for easier handling by network
        trend = 0.9 * trend / max(trend)
        noise = 0.1 * (0.1 + trend) * np.random.randn(lend)
        vec = trend + noise
        tab[ids] = vec
    # compute rolling average to get smoother variation
    tab = tab.rolling(window=window).mean().iloc[window:]
    return tab


def split_sequence(vec, n_steps=20):
    # split sequence into chunks of given size
    x_trues, y_trues = [], []
    steps = len(vec) - n_steps
    for step in range(steps):
        ilo = step
        iup = step + n_steps
        x_true, y_true = vec[ilo:iup], vec[iup]
        x_trues.append(x_true)
        y_trues.append(y_true)
    x_true = np.array(x_trues)
    y_true = np.array(y_trues)
    return x_true, y_true


def prepare_data(tab, n_steps=20):
    # convert data frame with multiple columns into chunks
    x_trues, y_trues = [], []
    if tab.ndim == 2:
        arr = np.atleast_2d(tab).T
    else:
        arr = np.atleast_2d(tab)
    for col in arr:
        x_true, y_true = split_sequence(col, n_steps=n_steps)
        x_trues.append(x_true)
        y_trues.append(y_true)
    x_true = np.vstack(x_trues)
    x_true = x_true.reshape(*x_true.shape, 1)
    y_true = np.hstack(y_trues)
    return x_true, y_true


def train_model(x_train, y_train, n_units=50, n_steps=20, epochs=200,
                n_steps_out=1):
    # get number of features from input data
    n_features = x_train.shape[2]
    # setup network
    # (feel free to use other combination of layers and parameters here)
    model = keras.models.Sequential()
    model.add(keras.layers.LSTM(n_units, activation='relu',
                                return_sequences=True,
                                input_shape=(n_steps, n_features)))
    model.add(keras.layers.LSTM(n_units, activation='relu'))
    model.add(keras.layers.Dense(n_steps_out))
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    # train network
    history = model.fit(x_train, y_train, epochs=epochs,
                        validation_split=0.1, verbose=1)
    return model, history


def predict_all(vec, model):
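    # note: n_steps is taken from the enclosing scope (set in the driver scripts above)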
    # split data
    x_test, y_test = prepare_data(vec, n_steps=n_steps)
    # use trained model to predict all data points from preceding chunk
    y_pred = model.predict(x_test, verbose=1)
    y_pred = np.hstack(y_pred)
    return y_test, y_pred


def predict_iterative(vec, n_steps, model):
    # use last chunk to predict next value, iterate until end is reached
    y_iter = vec.copy()
    lent = len(y_iter)
    steps = lent - n_steps - 1
    for step in range(steps):
        print(step, steps)
        ilo = step
        iup = step + n_steps + 1
        x_test, y_test = prepare_data(y_iter[ilo:iup], n_steps=n_steps)
        y_pred = model.predict(x_test, verbose=0)
        y_iter[iup] = y_pred[0, 0]  # extract the scalar prediction
    return y_iter[n_steps:]


def plot_single(y_test, y_plots, n_steps, nrows=2):
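    # note: tab, history and veo are globals defined in the Version A driver script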
    # prepare variables for plotting
    metric = 'mse'
    mima = [min(y_test), max(y_test)]
    titles = ['all', 'iterative']
    lin = np.arange(-n_steps, len(y_test))
    # create figure
    fig, axis = plt.subplots(figsize=(16, 9),
                             nrows=2, ncols=3)
    # plot time series
    axia = axis[1, 0]
    axia.set_title('original data')
    tab.plot(ax=axia)
    axia.set_xlabel('time')
    axia.set_ylabel('value')
    # plot network training history
    axia = axis[0, 0]
    axia.set_title('training history')
    axia.plot(history.history[metric], label='train')
    axia.plot(history.history['val_'+metric], label='test')
    axia.set_xlabel('epoch')
    axia.set_ylabel(metric)
    axia.set_yscale('log')
    plt.legend()
    # plot result for "all" and "iterative" prediction
    for idy, y_plot in enumerate(y_plots):
        # plot true/predicted time series
        axia = axis[idy, 1]
        axia.set_title(titles[idy])
        axia.plot(lin, veo, label='full')
        axia.plot(y_test, label='true')
        axia.plot(y_plot, label='predicted')
        plt.legend()
        axia.set_xlabel('time')
        axia.set_ylabel('value')
        axia.set_ylim(0, 1)
        # plot scatter plot of true/predicted data
        axia = axis[idy, 2]
        r2 = sklearn.metrics.r2_score(y_test, y_plot)
        axia.set_title('R2 = %.2f' % r2)
        axia.scatter(y_test, y_plot)
        axia.plot(mima, mima, color='black')
        axia.set_xlabel('true')
        axia.set_ylabel('predicted')
    plt.tight_layout()
    return None


def plot_multi(history, tab, y_pred, n_steps):
    # prepare variables for plotting
    metric = 'mse'
    # create figure
    fig, axis = plt.subplots(figsize=(16, 9),
                             nrows=1, ncols=2, squeeze=False)
    # plot network training history
    axia = axis[0, 0]
    axia.set_title('training history')
    axia.plot(history.history[metric], label='train')
    axia.plot(history.history['val_'+metric], label='test')
    axia.set_xlabel('epoch')
    axia.set_ylabel(metric)
    axia.set_yscale('log')
    plt.legend()
    # plot true/predicted time series
    axia = axis[0, 1]
    axia.plot(tab[0].values, label='true')
    axia.plot(range(n_steps, len(tab)), y_pred, label='predicted')
    plt.legend()
    axia.set_xlabel('time')
    axia.set_ylabel('value')
    axia.set_ylim(0, 1)
    plt.tight_layout()
    return None