Python: Using TimeDistributed with recurrent layers in Keras
I want to run an LSTM over several different sequences in each batch and then join the final outputs. Here is what I have been trying:
from keras.layers import Dense, Input, LSTM, Embedding, TimeDistributed
num_sentences = 4
num_features = 3
num_time_steps = 5
inputs = Input([num_sentences, num_time_steps])
emb_layer = Embedding(10, num_features)
embedded = emb_layer(inputs)
lstm_layer = LSTM(4)
shape = [num_sentences, num_time_steps, num_features]
lstm_outputs = TimeDistributed(lstm_layer, input_shape=shape)(embedded)
This gives me the following error:
Traceback (most recent call last):
  File "test.py", line 12, in <module>
    lstm_outputs = TimeDistributed(lstm_layer, input_shape=shape)(embedded)
  File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 546, in __call__
    self.build(input_shapes[0])
  File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/layers/wrappers.py", line 94, in build
    self.layer.build(child_input_shape)
  File "/Users/erick/anaconda2/lib/python2.7/site-packages/keras/layers/recurrent.py", line 702, in build
    self.input_dim = input_shape[2]
IndexError: tuple index out of range
I tried omitting the input_shape argument from TimeDistributed, but that didn't change anything.

input_shape should be an argument of the wrapped LSTM layer, not of the TimeDistributed wrapper. If you simply omit it, everything works fine for me:
from keras.layers import Dense, Input, LSTM, Embedding, TimeDistributed
num_sentences = 4
num_features = 3
num_time_steps = 5
inputs = Input([num_sentences, num_time_steps])
emb_layer = Embedding(10, num_features)
embedded = emb_layer(inputs)
lstm_layer = LSTM(4)
shape = [num_sentences, num_time_steps, num_features]
lstm_outputs = TimeDistributed(lstm_layer)(embedded)
#OUTPUT:
Using TensorFlow backend.
[Finished in 1.5s]
After trying the answer above and getting the same error, I realized my Keras version was probably outdated. It was indeed running Keras 1.2; the code runs fine on 2.0.
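For intuition, the mechanics of TimeDistributed (and the "join the final outputs" step the question asks for) can be emulated with plain NumPy: fold the sentence axis into the batch axis, apply the wrapped layer to each sequence, then unfold. This is only a sketch of the shape bookkeeping; the `fake_lstm` function below is a hypothetical stand-in (a mean over time steps) for a real LSTM's final output.

```python
import numpy as np

# Shapes matching the question (plus an assumed batch size of 2)
batch, num_sentences, num_time_steps, num_features = 2, 4, 5, 3

def fake_lstm(seq):
    # Stand-in for an LSTM that maps a (time_steps, features) sequence
    # to a fixed-size vector; here just the mean over time steps.
    return seq.mean(axis=0)

x = np.random.rand(batch, num_sentences, num_time_steps, num_features)

# TimeDistributed folds the extra axis into the batch dimension,
# applies the wrapped layer, then unfolds the result:
folded = x.reshape(batch * num_sentences, num_time_steps, num_features)
per_seq = np.stack([fake_lstm(s) for s in folded])   # (batch*sentences, features)
outputs = per_seq.reshape(batch, num_sentences, num_features)

# "Joining the final outputs" can then be a simple flatten/concatenate:
joined = outputs.reshape(batch, num_sentences * num_features)
print(joined.shape)  # (2, 12)
```

In the Keras model this corresponds to following `TimeDistributed(lstm_layer)` with a `Flatten` (or `Concatenate`) layer before any dense head.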