
Python TensorFlow linear regression: loss blows up

Tags: python, machine-learning, tensorflow, linear-regression, gradient-descent

I am trying to fit a very simple linear regression model with TensorFlow. However, the loss (mean squared error) does not decrease toward zero; it increases.

First, I generate my data:

x_data = np.random.uniform(high=10, low=0, size=100)
y_data = 3.5 * x_data - 4 + np.random.normal(loc=0, scale=2, size=100)
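Before debugging the optimizer, it helps to know the target: a closed-form least-squares fit on this data recovers parameters near the true slope 3.5 and intercept -4. A quick sketch (my own addition, not from the post; the post sets no random seed, so I fix one here for reproducibility):

```python
import numpy as np

np.random.seed(0)  # assumed seed; the original post does not set one
x_data = np.random.uniform(high=10, low=0, size=100)
y_data = 3.5 * x_data - 4 + np.random.normal(loc=0, scale=2, size=100)

# Closed-form least squares: for degree 1, np.polyfit returns [slope, intercept]
slope, intercept = np.polyfit(x_data, y_data, 1)
print(slope, intercept)  # close to 3.5 and -4
```

Any learning-rate setting that cannot at least approach these values is a sign the optimization itself, not the model, is the problem.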
Then I define the computation graph:

X = tf.placeholder(dtype=tf.float32, shape=100)
Y = tf.placeholder(dtype=tf.float32, shape=100)
m = tf.Variable(1.0)
c = tf.Variable(1.0)
Ypred = m*X + c
loss = tf.reduce_mean(tf.square(Ypred - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.1)
train = optimizer.minimize(loss)
Finally, I run it for 100 epochs:

steps = {}
steps['m'] = []
steps['c'] = []

losses=[]

for k in range(100):
    _m = session.run(m)
    _c = session.run(c)
    _l = session.run(loss, feed_dict={X: x_data, Y:y_data})
    session.run(train, feed_dict={X: x_data, Y:y_data})
    steps['m'].append(_m)
    steps['c'].append(_c)
    losses.append(_l)
However, when I plot the loss, I get:

[figure: the loss increases with every epoch]

The complete code can also be found.
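The blow-up is reproducible without TensorFlow. Here is a minimal NumPy sketch of the same gradient-descent update (my own reconstruction, with an added seed): with x drawn from [0, 10], the curvature of the MSE loss along m is about 2·mean(x²) ≈ 67, so any learning rate above roughly 2/67 ≈ 0.03 overshoots the minimum and the loss grows at every step.

```python
import numpy as np

np.random.seed(0)  # assumed seed for reproducibility
x = np.random.uniform(high=10, low=0, size=100)
y = 3.5 * x - 4 + np.random.normal(loc=0, scale=2, size=100)

def run_gd(lr, steps=20):
    """Plain gradient descent on MSE for y ≈ m*x + c; returns the loss history."""
    m, c = 1.0, 1.0
    losses = []
    for _ in range(steps):
        err = m * x + c - y
        losses.append(np.mean(err ** 2))
        m -= lr * 2 * np.mean(err * x)  # dL/dm = 2·mean(x·err)
        c -= lr * 2 * np.mean(err)      # dL/dc = 2·mean(err)
    return losses

diverging = run_gd(0.1)     # the question's rate: loss explodes
converging = run_gd(0.001)  # the answer's rate: loss shrinks
print(diverging[-1] > diverging[0], converging[-1] < converging[0])  # True True
```

The same stability bound explains why scaling x down (or normalizing the data) would also have fixed the divergence at the original rate.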

The learning rate is too high; 0.001 works well:

import numpy as np
import tensorflow as tf  # TF 1.x API (tf.placeholder, tf.train, tf.Session)
import matplotlib.pyplot as plt

x_data = np.random.uniform(high=10, low=0, size=100)
y_data = 3.5 * x_data - 4 + np.random.normal(loc=0, scale=2, size=100)

X = tf.placeholder(dtype=tf.float32, shape=100)
Y = tf.placeholder(dtype=tf.float32, shape=100)
m = tf.Variable(1.0)
c = tf.Variable(1.0)
Ypred = m * X + c
loss = tf.reduce_mean(tf.square(Ypred - Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=.001)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(init)
    steps = {}
    steps['m'] = []
    steps['c'] = []

    losses = []

    for k in range(100):
        _m = session.run(m)
        _c = session.run(c)
        _l = session.run(loss, feed_dict={X: x_data, Y: y_data})
        session.run(train, feed_dict={X: x_data, Y: y_data})
        steps['m'].append(_m)
        steps['c'].append(_c)
        losses.append(_l)

plt.plot(losses)
plt.savefig('loss.png')



Whenever you see the cost increase monotonically with the number of epochs, that is a sure sign your learning rate is too high. Repeatedly rerun training, dividing the learning rate by 10 each time, until the cost function clearly decreases with the number of epochs.
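That rule of thumb can be sketched as a small search loop. This is my own illustration in plain NumPy (a hand-written gradient-descent update rather than the TensorFlow graph above): keep dividing the rate by 10 until the final loss drops below the initial loss.

```python
import numpy as np

np.random.seed(0)  # assumed seed for reproducibility
x = np.random.uniform(high=10, low=0, size=100)
y = 3.5 * x - 4 + np.random.normal(loc=0, scale=2, size=100)

def loss_before_after(lr, steps=100):
    """Run gradient descent on MSE; return (initial loss, final loss)."""
    m, c = 1.0, 1.0
    first = None
    for _ in range(steps):
        err = m * x + c - y
        cur = np.mean(err ** 2)
        if first is None:
            first = cur
        if not np.isfinite(cur):  # already diverged; stop early
            return first, cur
        m -= lr * 2 * np.mean(err * x)
        c -= lr * 2 * np.mean(err)
    return first, np.mean((m * x + c - y) ** 2)

lr = 0.1
while True:
    first, last = loss_before_after(lr)
    if np.isfinite(last) and last < first:  # cost clearly decreasing: keep this rate
        break
    lr /= 10.0  # otherwise drop the learning rate by 10x and retry
print(lr)  # 0.01 for this data scale
```

In practice a line search or an adaptive optimizer (e.g. Adam) automates this, but the manual decade search is enough to diagnose a diverging linear regression.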