Neural network: simple TensorFlow neural network not increasing accuracy or decreasing loss?


I have the following training network:

graph = tf.Graph()
with graph.as_default():

    tf_train_dataset = tf.constant(X_train)
    tf_train_labels = tf.constant(y_train)
    tf_valid_dataset = tf.constant(X_test)

    weights = tf.Variable(tf.truncated_normal([X_train.shape[1], 1]))

    biases = tf.Variable(tf.zeros([num_labels]))
    logits = tf.nn.softmax(tf.matmul(tf_train_dataset, weights) + biases)

    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
I run it as follows:

num_steps = 10

with tf.Session(graph=graph) as session: 
    tf.initialize_all_variables().run()
    print('Initialized')
    for step in range(num_steps):
        _, l, predictions = session.run([optimizer, loss, train_prediction])
        print("Loss: ",l)
        print('Training accuracy: %.1f' % sklearn.metrics.accuracy_score(predictions.flatten(), y_train.flatten()))
But the result is:

Initialized
Loss:  0.0
Training accuracy: 0.5
Loss:  0.0
Training accuracy: 0.5

The shape of X_train is (213403, 25) and the shape of y_train is (213403, 1). I did not one-hot encode the labels because there are only two classes, 1 or 0. I also tried a quadratic loss function and the same thing happens: the loss does not decrease at all. I suspect a syntax mistake here, but I cannot find it.

You are passing the labels as a single column (without encoding them). The model cannot interpret the labels as a categorical (factor) type, so it treats them as continuous values.

Loss: 0.0 means the loss is zero, i.e. your model appears to fit perfectly. This happens because your labels are continuous (as in regression) while you are using the softmax_cross_entropy_with_logits loss function.
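
A quick numerical illustration (mine, not from the original post) of why the reported loss is exactly 0.0: with a single output column, softmax always returns 1.0, so the per-row cross entropy -label * log(1.0) is zero regardless of the data.

import tensorflow as tf

logits = tf.constant([[2.0], [-3.0], [0.5]])   # a single column, as in the question
labels = tf.constant([[1.0], [0.0], [1.0]])    # 0/1 labels in a single column

probs = tf.nn.softmax(logits)                  # softmax over one column is always 1.0
xent = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

with tf.Session() as sess:
    print(sess.run(probs))   # [[1.], [1.], [1.]]
    print(sess.run(xent))    # [0. 0. 0.]  -> mean loss 0.0

(Recent TensorFlow 1.x versions require the labels= and logits= keyword arguments; older versions accepted them positionally, as in the question.)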

Try passing a one-hot encoding of the labels and check again.
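
A minimal sketch of that suggestion, assuming TensorFlow 1.x and that y_train holds 0/1 labels of shape (N, 1); the helper to_one_hot and the float32 casts are illustrative additions, not from the original post. The sketch also passes the raw logits to the loss, since softmax_cross_entropy_with_logits applies the softmax internally.

import numpy as np
import tensorflow as tf

num_labels = 2

def to_one_hot(y, num_labels=2):
    # (N, 1) column of 0/1 labels -> (N, num_labels) one-hot matrix
    return np.eye(num_labels, dtype=np.float32)[y.astype(np.int32).ravel()]

graph = tf.Graph()
with graph.as_default():
    tf_train_dataset = tf.constant(X_train.astype(np.float32))
    tf_train_labels = tf.constant(to_one_hot(y_train))          # (N, 2) instead of (N, 1)
    tf_valid_dataset = tf.constant(X_test.astype(np.float32))

    # one output unit per class
    weights = tf.Variable(tf.truncated_normal([X_train.shape[1], num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # keep the logits raw; the loss applies the softmax itself
    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))

    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)

With two-column predictions, the accuracy check would then compare np.argmax(predictions, 1) against the original 0/1 labels instead of predictions.flatten().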