Python: running a BiLSTM model raises a 'Float error 8' when the input vectors contain some zeros
I do the padding myself, so there are 0 values in "debate", "reason", "claim", and "warrant". Feeding them into the BiLSTM architecture produces a 'Float error 8' with no other message. This error means that some number was divided by zero or an index was out of range, but there should not be any division by zero in the model. The code is as follows:
debate = tf.placeholder(tf.float32,[None,48,300])
reason = tf.placeholder(tf.float32,[None,48,300])
claim = tf.placeholder(tf.float32,[None,48,300])
warrant = tf.placeholder(tf.float32,[None,48,300])
y = tf.placeholder(tf.float32,[None,2])
n_hidden = 300
w = weight_variable([n_hidden,2])
b = bias_variable([2])
def bilstm(x, weights, biases):
    lstm_f = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)
    lstm_b = tf.contrib.rnn.LSTMCell(n_hidden, forget_bias=1.0)
    (alloutputs, output_states) = tf.nn.bidirectional_dynamic_rnn(lstm_f, lstm_b, x, dtype=tf.float32)
    (outputs, state) = output_states
    (output_state_fw, output_state_bw) = state
    return tf.matmul(tf.add(output_state_fw, output_state_bw), weights) + biases
final_representation = tf.concat([debate,reason,claim,warrant],1)
prediction = bilstm(final_representation,w,b)
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y,logits=prediction))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
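For context, here is a minimal NumPy sketch of the kind of zero-padding described above, and of recovering the true sequence lengths from such padded arrays (the array and variable names are illustrative, not from the original code):

```python
import numpy as np

# Toy batch: 2 sequences, max length 4, embedding size 3.
# Timesteps beyond the true length are padded with all-zero rows,
# like the padding described in the question.
batch = np.zeros((2, 4, 3), dtype=np.float32)
batch[0, :2] = 1.0  # first sequence really has length 2
batch[1, :3] = 1.0  # second sequence really has length 3

# Recover the true lengths: a timestep counts if any feature is non-zero.
seq_len = np.sum(np.any(batch != 0, axis=2), axis=1)
print(seq_len)  # one length per sequence
```

Lengths computed this way are what `tf.nn.bidirectional_dynamic_rnn` accepts via its `sequence_length` argument, so the RNN can skip the padded timesteps instead of processing the zero rows.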
When I feed the same input into a plain RNN architecture, it works.
Below is the RNN code that runs:
def RNN(X, weights, biases):
    lstm_cell = tf.nn.rnn_cell.LSTMCell(n_hidden, use_peepholes=True)
    # final_state[0] is the cell state
    # final_state[1] is the hidden state
    outputs, final_state = tf.nn.dynamic_rnn(lstm_cell, X, dtype=tf.float32)
    results = tf.nn.softmax(tf.matmul(final_state[1], weights) + biases)
    return results
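As I understand the TF 1.x `tf.nn.bidirectional_dynamic_rnn` return value, the second element nests as a pair of per-direction state tuples, which may matter for the unpacking in `bilstm` above. A plain-Python sketch of that nesting (the string values are placeholders standing in for tensors):

```python
# Placeholders standing in for tensors, to show only the tuple nesting.
c_fw, h_fw, c_bw, h_bw = "c_fw", "h_fw", "c_bw", "h_bw"

# tf.nn.bidirectional_dynamic_rnn returns (outputs, output_states), where
# output_states = (LSTMStateTuple(c_fw, h_fw), LSTMStateTuple(c_bw, h_bw)).
output_states = ((c_fw, h_fw), (c_bw, h_bw))

# One tuple per direction; index [1] of each is the final hidden state.
state_fw, state_bw = output_states
print(state_fw[1], state_bw[1])
```
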
Can anyone tell me what is going on? Have I misunderstood the BiLSTM model? There should not be any division by zero in a BiLSTM.
Thanks in advance.