
Python: all TensorFlow outputs are nan


On their website, tf provides model code for performing linear regression. However, I wanted to play around and see whether I could get it to do quadratic regression. To do that, I added a tf.Variable A, put it into the model, and modified the output so it would tell me what values it arrived at.

Here is what I ended up with:

import tensorflow as tf

# Model parameters
A = tf.Variable([.3], dtype=tf.float32)
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
q_model = A * (x**2) + W * x + b
y = tf.placeholder(tf.float32)

# loss
loss = tf.reduce_sum(tf.square(q_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

# training data
x_train = [0, 1, 2, 3, 4]
y_train = [0, 1, 4, 9, 16]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
  sess.run(train, {x: x_train, y: y_train})

# evaluate training accuracy
curr_A, curr_W, curr_b, curr_loss = sess.run([A, W, b, loss], {x: x_train, y: y_train})
print("A: %s W: %s b: %s loss: %s"%(curr_A, curr_W, curr_b, curr_loss))

Running it, every output is nan:

A: [ nan] W: [ nan] b: [ nan] loss: nan

What do you think the problem is here? Is it between the chair and the keyboard?

If you print the values of A, W, and b at every iteration (see the sketch after the output below), you will see that they alternate in sign (a positive value immediately followed by a negative one). This typically happens when the learning rate is too high. In your example, you should be able to fix this behavior by lowering the learning rate to about 0.001:

optimizer = tf.train.GradientDescentOptimizer(0.001)

With this learning rate, the loss decreases, A tends towards 1, and W and b tend towards zero, as expected:
A: [ 0.7536] W: [ 0.42800003] b: [-0.26100001] loss: 7.86113
A: [ 0.8581112] W: [ 0.45682004] b: [-0.252166] loss: 0.584708
A: [ 0.88233441] W: [ 0.46283191] b: [-0.25026742] loss: 0.199126
...
A: [ 0.96852171] W: [ 0.1454313] b: [-0.11387932] loss: 0.0183883
A: [ 0.96855479] W: [ 0.14527865] b: [-0.11376046] loss: 0.0183499
A: [ 0.96858788] W: [ 0.14512616] b: [-0.11364172] loss: 0.0183113
A: [ 0.9686209] W: [ 0.14497384] b: [-0.1135231] loss: 0.0182731
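
As a quick way to see the sign-flipping described above, you can print the parameters after each update. Here is a minimal sketch that reuses the question's exact model and the original (too high) learning rate of 0.01; only the per-step printing and the shortened loop are new:

import tensorflow as tf

# Same model as in the question
A = tf.Variable([.3], dtype=tf.float32)
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
q_model = A * (x**2) + W * x + b
y = tf.placeholder(tf.float32)

loss = tf.reduce_sum(tf.square(q_model - y))
train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)  # the problematic rate

x_train = [0, 1, 2, 3, 4]
y_train = [0, 1, 4, 9, 16]

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(10):  # a few steps are enough to see the divergence
    sess.run(train, {x: x_train, y: y_train})
    curr_A, curr_W, curr_b = sess.run([A, W, b])
    # With 0.01 the values alternate in sign and grow each step until they
    # overflow to nan; with 0.001 they settle towards A=1, W=0, b=0 instead.
    print("step %d A: %s W: %s b: %s" % (i, curr_A, curr_W, curr_b))

The quadratic term is what makes the original rate unstable: the gradient of the loss with respect to A is sum(2 * (q_model - y) * x**2), and since x_train reaches 4, the x**2 factor reaches 16, so A receives much larger updates than W or b. A step size that was fine for the tutorial's linear model therefore overshoots here; besides lowering the rate, normalizing the inputs or averaging the loss with tf.reduce_mean instead of tf.reduce_sum should have a similar stabilizing effect.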