
Python: Unexpected error in TensorFlow when testing simple linear regression


I am learning TensorFlow from a book called 《Tensorflow简明手册》 (A Concise Handbook of TensorFlow). It contains a piece of code that does linear regression with TensorFlow, but when I test it I get an AttributeError:

import numpy as np
X_raw = np.array([2013, 2014, 2015, 2016, 2017])
y_raw = np.array([12000, 14000, 15000, 16500, 17500])
X = (X_raw - X_raw.min()) / (X_raw.max() - X_raw.min())
y = (y_raw - y_raw.min()) / (y_raw.max() - y_raw.min())

import tensorflow as tf
X = tf.constant(X)
y = tf.constant(y)
a = tf.get_variable('a', dtype=tf.float64, shape=[], initializer=tf.zeros_initializer)
b = tf.get_variable('b', dtype=tf.float64, shape=[], initializer=tf.zeros_initializer)
variables = [a, b]
num_epoch = 10000
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-3)
for e in range(num_epoch):
    with tf.GradientTape() as tape:
        y_pred = a * X + b
        loss = 0.5 * tf.reduce_sum(tf.square(y_pred - y))
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(grads_and_vars=zip(grads, variables))
Here is the error:

Traceback (most recent call last):
  File "test.py", line 19, in <module>
    grads = tape.gradient(loss, variables)
  File "C:\Python35\lib\site-packages\tensorflow\python\eager\backprop.py", line 858, in gradient
    output_gradients=output_gradients)
  File "C:\Python35\lib\site-packages\tensorflow\python\eager\imperative_grad.py", line 63, in imperative_grad
    tape._tape, vspace, target, sources, output_gradients)  # pylint: disable=protected-access
AttributeError: 'Variable' object has no attribute '_id'

I don't know why this error occurs, and I can't figure out how to debug it.
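One quick check, as a debugging aid, is to ask TensorFlow whether eager execution is actually active, since tf.GradientTape can only record operations on eager tensors and variables:

import tensorflow as tf

# Prints False for the snippet above, because eager execution was never enabled
print(tf.executing_eagerly())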

The following works; I think using the numpy arrays directly is a bit more intuitive:

import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.reset_default_graph()
tf.enable_eager_execution()  # eager execution must be enabled before any ops are created

X_raw = np.array([2013, 2014, 2015, 2016, 2017])
y_raw = np.array([12000, 14000, 15000, 16500, 17500])
# min-max normalize both the inputs and the targets to [0, 1]
X = (X_raw - X_raw.min()) / (X_raw.max() - X_raw.min())
Y = (y_raw - y_raw.min()) / (y_raw.max() - y_raw.min())

x = tf.constant(X)
y = tf.constant(Y)
# eager-compatible variables for the slope and the intercept
a = tfe.Variable(0.0, name='a', dtype=tf.float64)
b = tfe.Variable(0.0, name='b', dtype=tf.float64)

def loss(x, y):
    # sum-of-squared-errors loss of the linear model a * x + b
    return 0.5 * tf.reduce_sum(tf.square(a * x + b - y))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-3)

num_epoch = 1000
for e in range(num_epoch):
    with tf.GradientTape() as t:  # records operations for automatic differentiation
        l = loss(x, y)
    grads = t.gradient(l, [a, b])
    optimizer.apply_gradients(grads_and_vars=zip(grads, [a, b]))
Output:

In [2]: a
Out[2]: <tf.Variable 'a:0' shape=() dtype=float64, numpy=0.5352067771256968>

In [3]: b
Out[3]: <tf.Variable 'b:0' shape=() dtype=float64, numpy=0.30109001612382946>
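
Because both X and y were min-max scaled, a prediction made with the fitted a and b has to be mapped back to the original scale. A minimal sketch of that step (the predict helper and the 2018 query year are illustrative, not part of the original answer):

def predict(year):
    # normalize the input year exactly as X_raw was normalized
    x_new = (year - X_raw.min()) / (X_raw.max() - X_raw.min())
    y_norm = a.numpy() * x_new + b.numpy()
    # map the normalized prediction back onto the original y scale
    return y_norm * (y_raw.max() - y_raw.min()) + y_raw.min()

print(predict(2018))  # roughly 17300 with the a and b values shown above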

I'd suggest reading up on how graphs are built and run inside a tf.Session. Are you using tf.enable_eager_execution()?

Yes! I forgot it! Thank you very much. See the edit.

I thought you were doing 5D regression with a single data point; I've updated it.