Many-to-many time series with a TensorFlow RNN and binary labels (Python)


I am trying to train a basic many-to-many recurrent neural network. The input data has a single feature (a sine function) and the labels are binary (1 and 2).

I can train it with an MSE loss function, but I run into problems when I try to replace the MSE with cross-entropy.

Here is the code I have so far, which works for the non-discrete labels:

import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np

n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
learning_rate = 0.001

# Create data 
fs = 1000 # sample rate 
f = 2 # the frequency of the signal
x = np.arange(fs) # the points on the x axis for plotting

# training features
dfX = np.array([ np.sin(2*np.pi*f * (i/fs)) for i in x]) 

#labels
dfX2 = np.array([ np.sin(2*np.pi*f * (i/fs)) for i in x]) 
dfX2[dfX2 < 0] = 1
dfX2[dfX2 > 0] = 2

# RNN

X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])

cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
    output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)


loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()

n_iterations = 1000
batch_size = 200
n_epochs = 10

with tf.Session() as sess:
  init.run()
  for epoch in range(n_epochs):
    for iteration in range(n_iterations - 200):
      X_batch = dfX[iteration:iteration + batch_size]
      X_batch = X_batch.reshape(-1, n_steps, n_inputs)
      y_batch = dfX2[(iteration + 1):(iteration + batch_size + 1)]
      y_batch = y_batch.reshape(-1, n_steps, n_outputs)
      sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
      if iteration % 100 == 0:
        mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
        print(iteration, "--", epoch, "\tMSE:", mse)
  X_new1 = dfX[37:37 + batch_size]
  X_new1 = X_new1.reshape(-1, n_steps, n_inputs)
  y_pred1 = sess.run(outputs, feed_dict={X: X_new1})
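As a side note on the batching above: each training step takes a 200-sample window of the sine wave and reshapes it into (batch, n_steps, n_inputs). A minimal NumPy sketch (reusing the fs, f, n_steps and batch_size values from the code, nothing TensorFlow-specific) shows the resulting shapes:

```python
import numpy as np

fs, f = 1000, 2          # sample rate and signal frequency, as above
n_steps, n_inputs = 20, 1
batch_size = 200

x = np.arange(fs)
dfX = np.sin(2 * np.pi * f * (x / fs))  # the single sine-wave feature

# One window of 200 consecutive samples becomes 10 sequences of 20 steps
X_batch = dfX[0:batch_size].reshape(-1, n_steps, n_inputs)
print(X_batch.shape)  # (10, 20, 1)
```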

But it does not work.

The code has to go through the following changes to work with a cross-entropy loss:

 xentropy = tf.nn.softmax_cross_entropy_with_logits_v2(
     labels=tf.one_hot(tf.cast(y, tf.int32), 2), logits=outputs)
1. One-hot labels: they should be 0 or 1. So change the label code to:

dfX2[dfX2 < 0] = 0
dfX2[dfX2 > 0] = 1
2. Since the labels are not one-hot encoded, they need to be converted to one-hot for the cross-entropy loss (note that the logits must then have a matching class dimension, i.e. n_outputs = 2 instead of 1):

 xentropy = tf.nn.softmax_cross_entropy_with_logits_v2(
     labels=tf.one_hot(tf.cast(y, tf.int32), 2), logits=outputs)
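For intuition, this is the computation that tf.one_hot plus softmax_cross_entropy_with_logits_v2 performs per time step, sketched in plain NumPy (this is only the underlying math, not the TensorFlow kernel; softmax_xent is a hypothetical helper name):

```python
import numpy as np

def softmax_xent(labels_int, logits):
    # one-hot encode the integer labels into depth-2 vectors
    one_hot = np.eye(2)[labels_int]                       # shape (n, 2)
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # cross-entropy per example
    return -(one_hot * np.log(probs)).sum(axis=-1)

labels = np.array([0, 1])                 # remapped binary labels
logits = np.array([[2.0, 0.0],            # confident in class 0 -> low loss
                   [0.0, 2.0]])           # confident in class 1 -> low loss
print(softmax_xent(labels, logits))       # both entries equal log(1 + e**-2)
```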
Making the above changes will get you similar MSE scores:

0 -- 0  MSE: 0.60017127
100 -- 0    MSE: 0.13623504
200 -- 0    MSE: 0.07625882
300 -- 0    MSE: 0.006987947

Comment: This produces this error: InvalidArgumentError (see above for traceback): Received a label value of 2 which is outside the valid range of [0, 1).
Reply: Please let me know whether the answer solves your problem, thanks. I have fixed several issues and tested the code.
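The "label value of 2 … outside the valid range of [0, 1)" error above is the symptom of feeding the original 1/2 labels to a two-class loss. A NumPy analogue (an np.eye row lookup, not the actual TensorFlow range check) shows why the labels must be remapped to 0/1:

```python
import numpy as np

num_classes = 2
eye = np.eye(num_classes)

# Valid two-class labels select a row of the 2x2 identity matrix
print(eye[np.array([0, 1])])   # rows [1, 0] and [0, 1]

# Label 2 has no row to select, so the lookup fails
try:
    eye[np.array([1, 2])]
except IndexError:
    print("label 2 is out of range for 2 classes")
```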