Machine learning: possible problem with LSTM in Lasagne


Using the simple constructor for an LSTM given in the tutorial, with input of dimensions [,,1], one should see output of shape [,,num_units]. But no matter what num_units is passed during construction, the output has the same shape as the input.

Here is minimal code to reproduce this issue:

    import lasagne
    import theano
    import theano.tensor as T
    import numpy as np

    num_batches= 20
    sequence_length= 100
    data_dim= 1
    train_data_3= np.random.rand(num_batches,sequence_length,data_dim).astype(theano.config.floatX)

    #As in the tutorial
    forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
    l_lstm = lasagne.layers.LSTMLayer(
                                     (num_batches,sequence_length, data_dim), 
                                     num_units=8,
                                     forgetgate=forget_gate
                                     )

    lstm_in= T.tensor3(name='x', dtype=theano.config.floatX)

    lstm_out = lasagne.layers.get_output(l_lstm, {l_lstm:lstm_in})
    f = theano.function([lstm_in], lstm_out)
    lstm_output_np= f(train_data_3)

    lstm_output_np.shape
    #= (20, 100, 1)
A non-degenerate LSTM (I mean one in default mode) should produce one output per unit, right? The code was run on Kaixhin's CUDA Lasagne Docker image. What gives?

Thanks

You can fix this by using lasagne.layers.InputLayer. The underlying problem is the call `get_output(l_lstm, {l_lstm: lstm_in})`: the dict maps layers to expressions that *replace* those layers' outputs, so mapping `l_lstm` itself substitutes `lstm_in` for the LSTM's output, and the function simply returns its input unchanged. Feeding the expression through an InputLayer (or passing it as the plain second argument) lets the LSTM actually run:

    import lasagne
    import theano
    import theano.tensor as T
    import numpy as np

    num_batches = 20
    sequence_length = 100
    data_dim = 1
    train_data_3 = np.random.rand(num_batches, sequence_length, data_dim).astype(theano.config.floatX)

    # As in the tutorial
    forget_gate = lasagne.layers.Gate(b=lasagne.init.Constant(5.0))
    input_layer = lasagne.layers.InputLayer(shape=(num_batches,  # <-- change
                  sequence_length, data_dim))  # <-- change
    l_lstm = lasagne.layers.LSTMLayer(input_layer,  # <-- change
                                      num_units=8,
                                      forgetgate=forget_gate
                                      )

    lstm_in = T.tensor3(name='x', dtype=theano.config.floatX)

    lstm_out = lasagne.layers.get_output(l_lstm, lstm_in)  # <-- change
    f = theano.function([lstm_in], lstm_out)
    lstm_output_np = f(train_data_3)

    print(lstm_output_np.shape)
    # = (20, 100, 8)
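As a sanity check independent of Lasagne and Theano, here is a minimal NumPy sketch of a plain LSTM forward pass (hypothetical helper, standard LSTM equations, random weights) showing why the last output dimension must equal num_units rather than the input dimension:

```python
import numpy as np

def lstm_forward(x, num_units, seed=0):
    """Hypothetical minimal LSTM: x has shape (batch, time, input_dim),
    returns hidden states of shape (batch, time, num_units)."""
    rng = np.random.default_rng(seed)
    batch, time, input_dim = x.shape
    # One weight set per gate: input, forget, candidate cell, output.
    W = rng.standard_normal((4, input_dim, num_units)) * 0.1
    U = rng.standard_normal((4, num_units, num_units)) * 0.1
    b = np.zeros((4, num_units))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    h = np.zeros((batch, num_units))  # hidden state
    c = np.zeros((batch, num_units))  # cell state
    outputs = []
    for t in range(time):
        xt = x[:, t, :]
        i = sigmoid(xt @ W[0] + h @ U[0] + b[0])   # input gate
        f = sigmoid(xt @ W[1] + h @ U[1] + b[1])   # forget gate
        g = np.tanh(xt @ W[2] + h @ U[2] + b[2])   # candidate cell
        o = sigmoid(xt @ W[3] + h @ U[3] + b[3])   # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs.append(h)
    return np.stack(outputs, axis=1)

x = np.random.rand(20, 100, 1)
out = lstm_forward(x, num_units=8)
print(out.shape)  # (20, 100, 8)
```

The hidden state h lives entirely in num_units-dimensional space (the input only enters through the `xt @ W` projections), so every per-timestep output necessarily has num_units features. If you ever see the input shape come back out unchanged, as in the question, the LSTM was never actually applied.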