Python TensorFlow does not converge with n×1 integer input (column vector)

I'm new to TensorFlow and machine learning. I tried to modify the basic example to simulate batched input, but I can't get it to converge.

If I change x_data to be in the [0, 1] range, W is computed correctly:

x_data = np.random.rand(numelements,1).astype(np.float32)
Is there a problem with my code? Here is a copy:

import tensorflow as tf
import numpy as np

# number of training samples
numelements = 100

# define input and labeled values
# note the input and output are actually scalar values
#x_data = np.random.rand(numelements,1).astype(np.float32)
x_data = np.random.randint(0, 10, size=(numelements,1)).astype(np.float32)
y_data = x_data * 10

# Try to find values for W and b that compute y_data = W * x + b
x = tf.placeholder(tf.float32, [None, 1])
W = tf.Variable(tf.zeros([1]))
b = tf.Variable(tf.zeros([1]))
y = tf.multiply(x, W) + b

# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Before starting, initialize the variables.  We will 'run' this first.
init = tf.global_variables_initializer()

# Launch the graph.
sess = tf.Session()
sess.run(init)

# Fit the line.
for step in range(81):
    sess.run(train, feed_dict={x: x_data})
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))

My friend helped me find that my gradient descent learning rate was too high. Using the trick from that post, I could clearly see the loss getting larger and larger until it eventually started to overflow. (With inputs as large as 10, the squared-error gradients are much bigger than with inputs in [0, 1], so a step size of 0.5 overshoots on every update.)
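For reference, a minimal sketch of what that monitoring looks like; fetching the loss tensor alongside the train op is my addition, not part of the original script. With the 0.5 learning rate and inputs in [0, 10], the printed loss grows every step until it overflows:

for step in range(81):
    # fetch the loss together with the train op so divergence is visible
    _, loss_val = sess.run([train, loss], feed_dict={x: x_data})
    if step % 20 == 0:
        print(step, loss_val, sess.run(W), sess.run(b))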

I changed the learning rate to 0.005 and it started to converge.
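The only change needed is the optimizer's step size; a sketch of the corrected lines, assuming the rest of the script stays the same (the smaller rate may need more than the original 81 iterations to bring W close to 10):

# a much smaller step size keeps the updates stable for inputs in [0, 10]
optimizer = tf.train.GradientDescentOptimizer(0.005)
train = optimizer.minimize(loss)

for step in range(81):
    sess.run(train, feed_dict={x: x_data})
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))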