
Python Keras Sequential: specifying the input shape with three arguments


I have a dataset with the following setup:

import numpy as np

X = np.random.rand(100, 20, 3)
Here there are 100 time slices, 20 observations, and 3 attributes per observation.

I'm trying to figure out how to pass the data above into the following Keras Sequential model:

from keras.models import Sequential, Model
from keras.layers import Dense, LSTM, Dropout, Activation
import keras

# config
stateful = False
look_back = 3
lstm_cells = 1024
dropout_rate = 0.5
n_features = int(X.shape[1]*3)
input_shape = (look_back, n_features, 3)
output_shape = n_features

def loss(y_true, y_pred):
  return keras.losses.mean_squared_error(y_true, y_pred)

model = Sequential()
model.add(LSTM(lstm_cells, stateful=stateful, return_sequences=True, input_shape=input_shape))
model.add(Dense(output_shape, activation='relu'))
model.compile(loss=loss, optimizer='sgd')
Running this throws:

ValueError: Input 0 is incompatible with layer lstm_23: expected ndim=3, found ndim=4


Does anyone know how I can reshape X so it can be passed into the model? Any suggestions would help.
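For background, the LSTM layer consumes 3-D batches shaped (batch, timesteps, features), and input_shape excludes the batch axis, so it must be a 2-tuple; the 3-tuple above implies a 4-D input, hence the ndim error. A minimal numpy sketch of flattening one window of X into a 2-D (timesteps, features) sample (the specific window slice here is only illustrative):

```python
import numpy as np

X = np.random.rand(100, 20, 3)
look_back = 3

# One window of look_back time slices; flatten each slice's
# 20 observations x 3 attributes into a single 60-feature vector
window = X[0:look_back].reshape(look_back, -1)
print(window.shape)  # (3, 60)
```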

This seems to get things working:

from keras.models import Sequential, Model
from keras.layers import Dense, LSTM, Dropout, Activation
import keras

# config
stateful = False
look_back = 3
lstm_cells = 1024
dropout_rate = 0.5
n_features = int(X.shape[1]) * 3
input_shape = (look_back, n_features)
output_shape = n_features

def loss(y_true, y_pred):
  return keras.losses.mean_squared_error(y_true, y_pred)

model = Sequential()
model.add(LSTM(lstm_cells, stateful=stateful, return_sequences=True, input_shape=input_shape))
model.add(LSTM(lstm_cells, stateful=stateful, return_sequences=True))
model.add(LSTM(lstm_cells, stateful=stateful))
model.add(Dense(output_shape, activation='relu'))
model.compile(loss=loss, optimizer='sgd')
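To see how the shape evolves through the stack, here is a scaled-down sketch of the model above (8 units instead of 1024, and an explicit Input layer for the shape; the unit count is purely illustrative):

```python
from keras.models import Sequential
from keras.layers import Input, LSTM, Dense

model = Sequential([
    Input(shape=(3, 60)),            # (batch, look_back, n_features)
    LSTM(8, return_sequences=True),  # -> (batch, 3, 8)
    LSTM(8, return_sequences=True),  # -> (batch, 3, 8)
    LSTM(8),                         # -> (batch, 8): last timestep only
    Dense(60, activation='relu'),    # -> (batch, 60)
])
print(model.output_shape)  # (None, 60)
```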
The training data can then be partitioned as follows:

# build training data
train_x = []
train_y = []
n_time = int(X.shape[0])
n_obs = int(X.shape[1])
n_attrs = int(X.shape[2])

# note we flatten the last dimension
for i in range(look_back, n_time-1, 1):
  train_x.append( X[i-look_back:i].reshape(look_back, n_obs * n_attrs ) )
  train_y.append( X[i+1].ravel() )

train_x = np.array(train_x)
train_y = np.array(train_y)
The toy model can then be trained:

model.fit(train_x, train_y, epochs=10, batch_size=10)
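With the assumed X of shape (100, 20, 3) and look_back = 3, the loop above yields 96 windows, so the arrays passed to fit should come out as (96, 3, 60) and (96, 60). A quick sanity check, reusing the names above:

```python
import numpy as np

X = np.random.rand(100, 20, 3)
look_back = 3
n_time, n_obs, n_attrs = X.shape

train_x, train_y = [], []
for i in range(look_back, n_time - 1):
    # each sample: look_back time slices, flattened to n_obs * n_attrs features
    train_x.append(X[i - look_back:i].reshape(look_back, n_obs * n_attrs))
    train_y.append(X[i + 1].ravel())

train_x = np.array(train_x)
train_y = np.array(train_y)
print(train_x.shape, train_y.shape)  # (96, 3, 60) (96, 60)
```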

Can you explain why three similar add(LSTM...) calls are needed?

@Helen Sure. In this example I was just trying to reimplement a neural network from the char-rnn work, which has multiple layers of LSTM cells. In general, these multi-layer LSTM stacks seem to help with learning longer data sequences...

Ah, so each of those calls adds an additional layer? (I was only confused because the original code had just one such call, so I wondered whether all three together replace the original single call, or whether they are additive.)

Yes, each LSTM() call adds a new layer. If you get stuck like I did, try reading up on the return_sequences parameter, which is important for controlling the shape of the data as it flows through the network. For more information, check this link:
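To illustrate the return_sequences point from the comments above, a layer can be applied directly to a sample batch to compare output shapes (a sketch; the unit count and input shape are arbitrary):

```python
import numpy as np
from keras.layers import LSTM

x = np.random.rand(2, 5, 4).astype("float32")  # (batch, timesteps, features)

# return_sequences=True emits the hidden state at every timestep
print(LSTM(8, return_sequences=True)(x).shape)  # (2, 5, 8)

# the default (False) emits only the final hidden state
print(LSTM(8)(x).shape)                         # (2, 8)
```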