Python TensorFlow XOR NN eval function error


I'm trying to write an XOR MLP in vanilla TensorFlow, but I can't figure out how to write the eval function.

I get the error

InvalidArgumentError (see above for traceback): targets[1] is out of range

When the accuracy.eval line is commented out, no error is produced. Here is my code:

import numpy as np
import tensorflow as tf

n_inputs = 2
n_hidden = 3
n_outputs = 1

def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

reset_graph()

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name='X')
y = tf.placeholder(tf.float32, shape=(None), name='y')

def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.get_shape()[1])
        stddev = 2 / np.sqrt(n_inputs)
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
        W = tf.Variable(init, name="weights")
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        if activation is not None:
            return activation(Z)
        else: return Z

with tf.name_scope('dnn'):
    hidden = neuron_layer(X, n_hidden, name='hidden', activation=tf.nn.sigmoid)
    logits = neuron_layer(hidden, n_outputs, name='outputs')

with tf.name_scope('loss'):
    bin_xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
    loss = tf.reduce_mean(bin_xentropy, name='loss')    

learning_rate = 0.1

with tf.name_scope('train'):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
    training_op = optimizer.minimize(loss)

with tf.name_scope('eval'):    
    correct = tf.nn.in_top_k(logits, tf.cast(y,tf.int32), 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    accuracy_summary = tf.summary.scalar('accuracy', accuracy)


init = tf.global_variables_initializer()
saver = tf.train.Saver()

n_epochs = 100
batch_size = 4

def shuffle_batch(X, y, batch_size): # not really needed for XOR
    rnd_idx = np.random.permutation(len(X))
    n_batches = len(X) // batch_size
    for batch_idx in np.array_split(rnd_idx, n_batches):
        X_batch, y_batch = X[batch_idx], y[batch_idx]
        yield X_batch, y_batch

X_train = [
    (0, 0),
    (0, 1),
    (1, 0),
    (1, 1)
]
y_train = [0,1,1,0]    

X_train = np.array(X_train)
y_train = np.array(y_train)

with tf.Session() as sess:
    init.run()
    for epoch in range(n_epochs):
        for X_batch, y_batch in shuffle_batch(X_train, y_train, batch_size):
            sess.run(training_op, feed_dict={X: X_batch, y: y_batch})        
        acc = accuracy.eval(feed_dict={X: X_train, y: y_train})
        print(acc)

Can anyone tell me what is wrong with this function? I tried adapting the XOR from the MNIST example in the machine learning handbook.

I'm not quite sure what you are trying to achieve with

correct = tf.nn.in_top_k(logits, tf.cast(y, tf.int32), 1)
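For intuition: in_top_k treats the columns of its first argument as class scores and the targets as class indices. A minimal NumPy sketch of that check (the function name and error message here are illustrative, not TF's exact implementation) shows why n_outputs = 1 makes a target of 1 invalid, which matches the "targets[1] is out of range" error:

```python
import numpy as np

# Sketch of what tf.nn.in_top_k(predictions, targets, k) verifies:
# for each row i, is targets[i] among the indices of the k largest
# columns of predictions[i]? With predictions of shape (batch, 1)
# the only valid class index is 0, so a target of 1 is out of range.
def in_top_k(predictions, targets, k=1):
    n_classes = predictions.shape[1]
    if np.any(targets >= n_classes):
        raise ValueError("targets out of range")
    top_k = np.argsort(-predictions, axis=1)[:, :k]
    return np.array([t in row for t, row in zip(targets, top_k)])

preds = np.array([[0.2], [0.9]])   # shape (2, 1): only class index 0 exists
try:
    in_top_k(preds, np.array([0, 1]))
except ValueError as e:
    print(e)  # targets out of range
```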

I suggest using

correct = tf.equal(
    tf.reshape(
        tf.greater_equal(tf.nn.sigmoid(logits), 0.5), [-1]
    ),
    tf.cast(y, tf.bool)
)
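The same thresholding logic can be sketched in plain NumPy (the logit values below are made up for illustration): squash the logits with a sigmoid, threshold at 0.5, flatten the (batch, 1) column to match the shape of y, and average the elementwise matches:

```python
import numpy as np

# logits has shape (batch, 1); y has shape (batch,), as in the question.
logits = np.array([[-2.0], [1.5], [0.3], [-0.7]])
y = np.array([0, 1, 1, 0])

probs = 1.0 / (1.0 + np.exp(-logits))       # sigmoid, shape (4, 1)
pred = (probs >= 0.5).reshape(-1)           # boolean predictions, shape (4,)
correct = pred == y.astype(bool)            # elementwise comparison
accuracy = correct.astype(np.float32).mean()
print(accuracy)  # 1.0 — all four thresholded predictions match y
```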

Edit: I noticed that with the solution given above, the accuracy stayed stuck at 0.5. With the following changes I was able to make it work (accuracy: 100.0).

Change the network to the following (use tanh, and use two hidden layers):

with tf.name_scope('dnn'):
    hidden1 = neuron_layer(X, n_hidden, name='hidden1', activation=tf.nn.tanh)
    hidden2 = neuron_layer(hidden1, n_hidden, name='hidden2', activation=tf.nn.tanh)
    logits = neuron_layer(hidden2, n_outputs, name='outputs')

n_hidden = 7
n_epochs = 5


Note: I'm not sure why it needed two hidden layers, but apparently it does for this setup to work.

Hmm, thanks, that makes more sense; it's just that something seems reversed, since it got worse rather than better.

I edited my answer with another finding. Hope that helps!

Thanks! I found from another question that I should also make the target labels a list of lists. Maybe that was part (or all) of the problem.
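The "list of lists" remark can be illustrated with NumPy's broadcasting rules (TF's elementwise ops follow the same shape rules; the arrays below are just for demonstration): with logits of shape (4, 1) and labels of shape (4,), an elementwise op silently broadcasts to (4, 4) instead of producing one value per example.

```python
import numpy as np

logits = np.zeros((4, 1))                 # shape (4, 1), one column of logits
y_flat = np.array([0., 1., 1., 0.])       # shape (4,), a flat list of labels
y_col = y_flat.reshape(-1, 1)             # shape (4, 1), a "list of lists"

# Flat labels broadcast against the column of logits: (4, 1) op (4,) -> (4, 4)
print((logits - y_flat).shape)  # (4, 4) — silently wrong, 16 loss terms
# Column labels line up with the logits: (4, 1) op (4, 1) -> (4, 1)
print((logits - y_col).shape)   # (4, 1) — one loss term per example
```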