How do I train a neural network to predict the sqrt of a number in TensorFlow?

I want to train a NN to predict the sqrt of a number in tensorflow. Below is my code, but the loss will not drop to 0 and the result is not correct. What is the problem?

#!/usr/bin/env python

import numpy as np
import tensorflow as tf

if __name__ == '__main__':
    dimension = 1
    X = tf.placeholder(tf.float32, [None, dimension])
    W = tf.Variable(tf.random_normal([dimension, 100], stddev=0.01))
    b = tf.Variable(tf.zeros([100]))
    h1 = tf.nn.relu(tf.matmul(X, W) + b)

    W2 = tf.Variable(tf.random_normal([100, 50], stddev=0.01))
    b2 = tf.Variable(tf.zeros([50]))
    h2 = tf.nn.relu(tf.matmul(h1, W2) + b2)

    W3 = tf.Variable(tf.random_normal([50, 1], stddev=0.01))
    b3 = tf.Variable(tf.zeros([1]))
    y = tf.nn.relu(tf.matmul(h2, W3) + b3)  # note: ReLU on the output layer

    Y = tf.placeholder(tf.float32, [None, dimension])

    cost = tf.reduce_mean(tf.pow(y - Y, 2))
    optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(1000):
            sx = np.random.rand(1000, 1)
            sy = np.sqrt(sx)
            sess.run(optimizer, feed_dict={X: sx, Y: sy})
            c = sess.run(cost, feed_dict={X: sx, Y: sy})
            print("Epoch:", '%04d' % (epoch + 1), "cost=", "%.03f" % c)
        sx = np.random.rand(10000, 1)
        sy = np.sqrt(sx)
        tc = sess.run(cost, feed_dict={X: sx, Y: sy})
        print("Testing cost=", tc)
        sx = np.array([[0.01], [0.5]])
        sy = np.sqrt(sx)
        print(sy)
        print(sess.run(y, feed_dict={X: sx, Y: sy}))
        print(sess.run(cost, feed_dict={X: sx, Y: sy}))
Here is the output; it cannot get the correct result:

...
('Epoch:', '0999', 'cost=', '0.502')
('Epoch:', '1000', 'cost=', '0.499')
('Testing cost=', 0.49828479)
[[ 0.1       ]
 [ 0.70710678]]
[[ 0.]
 [ 0.]]
0.255

I think there are a few things to understand here:

1. Avoid ReLU in the last layer, because it can make the gradient zero (see the sketch after this list). Here the output gets stuck at 0; for targets sqrt(x) with x uniform in [0, 1), always predicting 0 gives an MSE of E[(sqrt(x))^2] = E[x] = 0.5, which is exactly where your training cost plateaus.

2. It is hard for a NN to extrapolate over all random values. The loss may decrease, but it will not give you correct results on new data.

3. You have to pick a subset of the data that you can observe (I took integers in the range [1, 50]); you can train and predict properly on that subset, but it will not extrapolate well to other subsets (see the probe sketch after the output log below).
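
A minimal sketch of point 1 (illustrative only, not part of the original post): the derivative of relu(z) = max(z, 0) is 0 for any z <= 0, so a unit whose pre-activation goes non-positive passes no gradient back; the last line also confirms numerically that a constant-zero prediction costs about 0.5 on this data.

import numpy as np

z = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
print(np.maximum(z, 0.0))          # relu(z):     [0.  0.  0.  0.3 2. ]
print((z > 0).astype(np.float32))  # d relu / dz: [0. 0. 0. 1. 1.] -- dead where z <= 0

x = np.random.rand(1000000, 1)
print(np.mean(np.sqrt(x) ** 2))    # ~0.5: the cost of always predicting 0, matching the log above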

import numpy as np
import tensorflow as tf

if __name__ == '__main__':
    dimension = 1
    X = tf.placeholder(tf.float32, [None, dimension])
    W = tf.Variable(tf.random_normal([dimension, 100], stddev=0.01))
    b = tf.Variable(tf.zeros([100]))
    h1 = tf.nn.relu(tf.matmul(X, W) + b)

    W2 = tf.Variable(tf.random_normal([100, 50], stddev=0.01))
    b2 = tf.Variable(tf.zeros([50]))
    h2 = tf.nn.relu(tf.matmul(h1, W2) + b2)

    W3 = tf.Variable(tf.random_normal([50, 1], stddev=0.01))
    b3 = tf.Variable(tf.zeros([1]))
    y = tf.matmul(h2, W3) + b3  # linear output layer: no ReLU here

    Y = tf.placeholder(tf.float32, [None, dimension])

    cost = tf.reduce_mean(tf.squared_difference(y,Y))
    optimizer = tf.train.GradientDescentOptimizer(0.001).minimize(cost)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        cap = 50
        for epoch in range(2000):
            sx = np.random.randint(cap,size=(100, 1))
            #sx = np.random.rand(100,1)
            sy = np.sqrt(sx)
            op,c = sess.run([optimizer,cost], feed_dict={X: sx, Y: sy})
            if epoch % 100 == 0:
                print("Epoch:", '%04d' % (epoch + 1), "cost=", "%.03f" % c)

        #sx = np.random.rand(10,1)
        sx = np.random.randint(cap,size=(10,1))
        sy = np.sqrt(sx)
        print "Input"
        print sx
        print "Expected Output"
        print sy
        print "Predicted Output"
        print sess.run(y, feed_dict={X: sx, Y: sy})
        print "Error"
        print sess.run(cost, feed_dict={X: sx, Y: sy})
Output log:

('Epoch:', '0001', 'cost=', '25.258')
('Epoch:', '0101', 'cost=', '0.428')
('Epoch:', '0201', 'cost=', '0.452')
('Epoch:', '0301', 'cost=', '0.456')
('Epoch:', '0401', 'cost=', '0.320')
('Epoch:', '0501', 'cost=', '0.306')
('Epoch:', '0601', 'cost=', '0.312')
('Epoch:', '0701', 'cost=', '0.321')
('Epoch:', '0801', 'cost=', '0.268')
('Epoch:', '0901', 'cost=', '0.228')
('Epoch:', '1001', 'cost=', '0.264')
('Epoch:', '1101', 'cost=', '0.246')
('Epoch:', '1201', 'cost=', '0.241')
('Epoch:', '1301', 'cost=', '0.251')
('Epoch:', '1401', 'cost=', '0.141')
('Epoch:', '1501', 'cost=', '0.218')
('Epoch:', '1601', 'cost=', '0.213')
('Epoch:', '1701', 'cost=', '0.146')
('Epoch:', '1801', 'cost=', '0.186')
('Epoch:', '1901', 'cost=', '0.176')
Input
[[29]
 [39]
 [10]
 [ 2]
 [ 2]
 [17]
 [ 4]
 [26]
 [ 3]
 [31]]
Expected Output
[[ 5.38516481]
 [ 6.244998  ]
 [ 3.16227766]
 [ 1.41421356]
 [ 1.41421356]
 [ 4.12310563]
 [ 2.        ]
 [ 5.09901951]
 [ 1.73205081]
 [ 5.56776436]]
Predicted Output
[[ 5.11237049]
 [ 6.35184956]
 [ 2.75735927]
 [ 1.76557863]
 [ 1.76557863]
 [ 3.62499475]
 [ 2.01356125]
 [ 4.74052668]
 [ 1.88956988]
 [ 5.36026621]]
Error
0.0941391
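
To make point 3 concrete, here is a hedged sketch of an extrapolation probe (not part of the original answer; the input values are made up) that could be appended inside the same `with tf.Session()` block, after training:

        # Hypothetical probe: inputs far outside the training range [0, cap).
        sx = np.array([[100.0], [400.0]])
        print(np.sqrt(sx))                     # true answers: 10 and 20
        print(sess.run(y, feed_dict={X: sx}))  # typically far off: the net extrapolates poorly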

@prime_tang, does this help you? So you basically asked the question the way you saw fit. It is still not clear what you mean by "I want to train sqrt with a NN". Try explaining to anyone who works in machine learning what it means to train a square root using a neural network. Hi @SalvadorDali, I think prime_tang wants to know: "Can a neural network be trained to compute the square root of any number? How?". It is just an approximation of a function, like any other function (xor/xnor). OK, "a neural network can be trained to compute the square root of any number" makes sense. If that is the case, then yes, a NN can approximate any function to arbitrary accuracy. But I never thought that was what the original question wanted.