Reusing LSTM variables in TensorFlow

I am trying to build an RNN using an LSTM. I made the LSTM model, followed by two DNN layers and one regression output layer.

I trained on the data, and the final training loss was about 0.009. However, when I applied the model to the test data, the loss was about 0.5.

The first-epoch training loss is also about 0.5, so I think the test model is not using the trained variables.

The only difference between the training model and the test model is the batch size: training batch size = 100~200, test batch size = 1.

In the main function, I create the LSTM instances. The model is built in the LSTM __init__:

def __init__(self,config,train_model=None):
    self.sess = sess = tf.Session()

    self.num_steps = num_steps = config.num_steps
    self.lstm_size = lstm_size = config.lstm_size
    self.num_features = num_features = config.num_features
    self.num_layers = num_layers = config.num_layers
    self.num_hiddens = num_hiddens = config.num_hiddens
    self.batch_size = batch_size = config.batch_size
    self.train = train = config.train
    self.epoch = config.epoch
    self.learning_rate = learning_rate = config.learning_rate

    with tf.variable_scope('model') as scope:        
        self.lstm_cell = lstm_cell = tf.nn.rnn_cell.LSTMCell(lstm_size,initializer = tf.contrib.layers.xavier_initializer(uniform=False))
        self.cell = cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * num_layers)

    with tf.name_scope('placeholders'):
        self.x = tf.placeholder(tf.float32,[self.batch_size,num_steps,num_features],
                                name='input-x')
        self.y = tf.placeholder(tf.float32, [self.batch_size,num_features],name='input-y')
        self.init_state = cell.zero_state(self.batch_size,tf.float32)
    with tf.variable_scope('model'):
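        # NOTE: tf.Variable below ignores the reuse flag of the enclosing
        # variable_scope, so every LSTM instance gets its own fresh W1..b3.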
        self.W1 = tf.Variable(tf.truncated_normal([lstm_size*num_steps,num_hiddens],stddev=0.1),name='W1')
        self.b1 = tf.Variable(tf.truncated_normal([num_hiddens],stddev=0.1),name='b1')
        self.W2 = tf.Variable(tf.truncated_normal([num_hiddens,num_hiddens],stddev=0.1),name='W2')
        self.b2 = tf.Variable(tf.truncated_normal([num_hiddens],stddev=0.1),name='b2')
        self.W3 = tf.Variable(tf.truncated_normal([num_hiddens,num_features],stddev=0.1),name='W3')
        self.b3 = tf.Variable(tf.truncated_normal([num_features],stddev=0.1),name='b3')


    self.output, self.loss = self.inference()
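    # NOTE: each instance also builds its own tf.Session and re-runs the
    # initializer, so values trained in one session are never seen by the other.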
    tf.initialize_all_variables().run(session=sess)                
    tf.initialize_variables([self.b2]).run(session=sess)

    if train_model is None:
        self.train_step = tf.train.GradientDescentOptimizer(self.learning_rate).minimize(self.loss)
Using the __init__ above, the two LSTM instances are created as follows:

with tf.variable_scope("model",reuse=None):
    train_model = LSTM(main_config)
with tf.variable_scope("model", reuse=True):
    predict_model = LSTM(predict_config)
After making the two LSTM instances, I trained train_model and then fed the test set into predict_model.


Why are the variables not being reused?

The problem is that if you want to reuse a variable scope, you should create the variables with tf.get_variable(), not tf.Variable().

Take a look at the Sharing Variables how-to and you will understand it better.
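
To see the difference concretely, here is a minimal, self-contained sketch (TF 1.x API, the same generation as the code in the question; the scope and variable names are made up for illustration):

import tensorflow as tf

with tf.variable_scope("demo"):
    v1 = tf.get_variable("v", shape=[1])       # creates demo/v
with tf.variable_scope("demo", reuse=True):
    v2 = tf.get_variable("v", shape=[1])       # returns the SAME demo/v
print(v1 is v2)                                # True

with tf.variable_scope("demo2"):
    a = tf.Variable(tf.zeros([1]), name="w")   # creates demo2/w
with tf.variable_scope("demo2", reuse=True):
    b = tf.Variable(tf.zeros([1]), name="w")   # creates demo2_1/w: reuse is ignored
print(a is b)                                  # False: tf.Variable always makes a new one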

Also, you don't need a session here: you don't have to initialize the variables when you define the model. The variables should be initialized when you are about to train the model.
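
For example, the instantiation code from the question then becomes a sketch like this: build both models first, open a single session, and initialize once just before training (tf.initialize_all_variables matches the TF version used here; it was later renamed tf.global_variables_initializer):

with tf.variable_scope("model", reuse=None):
    train_model = LSTM(main_config)
with tf.variable_scope("model", reuse=True):
    predict_model = LSTM(predict_config)

sess = tf.Session()
sess.run(tf.initialize_all_variables())   # one init, one shared session
# ... train train_model with sess, then feed the test set to predict_model
# in the SAME session, so the trained values are actually reused.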

With the variables made reusable, the code looks like this:

def __init__(self,config,train_model=None):
    self.num_steps = num_steps = config.num_steps
    self.lstm_size = lstm_size = config.lstm_size
    self.num_features = num_features = config.num_features
    self.num_layers = num_layers = config.num_layers
    self.num_hiddens = num_hiddens = config.num_hiddens
    self.batch_size = batch_size = config.batch_size
    self.train = train = config.train
    self.epoch = config.epoch
    self.learning_rate = learning_rate = config.learning_rate

    with tf.variable_scope('model') as scope:        
        self.lstm_cell = lstm_cell = tf.nn.rnn_cell.LSTMCell(lstm_size,initializer = tf.contrib.layers.xavier_initializer(uniform=False))
        self.cell = cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * num_layers)

    with tf.name_scope('placeholders'):
        self.x = tf.placeholder(tf.float32,[self.batch_size,num_steps,num_features],
                                name='input-x')
        self.y = tf.placeholder(tf.float32, [self.batch_size,num_features],name='input-y')
        self.init_state = cell.zero_state(self.batch_size,tf.float32)
    with tf.variable_scope('model'):
        self.W1 = tf.get_variable(initializer=tf.truncated_normal([lstm_size*num_steps,num_hiddens],stddev=0.1),name='W1')
        self.b1 = tf.get_variable(initializer=tf.truncated_normal([num_hiddens],stddev=0.1),name='b1')
        self.W2 = tf.get_variable(initializer=tf.truncated_normal([num_hiddens,num_hiddens],stddev=0.1),name='W2')
        self.b2 = tf.get_variable(initializer=tf.truncated_normal([num_hiddens],stddev=0.1),name='b2')
        self.W3 = tf.get_variable(initializer=tf.truncated_normal([num_hiddens,num_features],stddev=0.1),name='W3')
        self.b3 = tf.get_variable(initializer=tf.truncated_normal([num_features],stddev=0.1),name='b3')


    self.output, self.loss = self.inference()

    if train_model is None:
        self.train_step = tf.train.GradientDescentOptimizer(self.learning_rate).minimize(self.loss)
To see which variables exist after creating train_model and predict_model, use the following code:

for v in tf.all_variables():
    print(v.name)
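
With the original tf.Variable code you should see two copies of every weight, something like model/model/W1:0 and model_1/model/W1:0, one per instance. After switching to tf.get_variable() with reuse=True, each weight should appear only once (e.g. model/model/W1:0), confirming that train_model and predict_model share the same variables.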