
Python: TensorFlow's AdamOptimizer and GradientDescentOptimizer unable to fit simple data

Similar question:

I am trying out TensorFlow. I generated simple data that is linearly separable and tried to fit a linear model to it. Here is the code:

import numpy as np
import tensorflow as tf  # written against the TF 1.x (graph-mode) API

np.random.seed(2010)
n = 300
x_data = np.random.random([n, 2]).tolist()
# the label depends only on the first coordinate, so the data is linearly separable
y_data = [[1., 0.] if v[0] > 0.5 else [0., 1.] for v in x_data]

x = tf.placeholder(tf.float32, [None, 2])
W = tf.Variable(tf.zeros([2, 2]))
b = tf.Variable(tf.zeros([2]))
y = tf.sigmoid(tf.matmul(x, W) + b)  # elementwise sigmoid, one output per class

y_ = tf.placeholder(tf.float32, [None, 2])
# clip to avoid log(0)
cross_entropy = -tf.reduce_sum(y_ * tf.log(tf.clip_by_value(y, 1e-9, 1)))
train_step = tf.train.AdamOptimizer(0.01).minimize(cross_entropy)

correct_predict = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))

s = tf.Session()
s.run(tf.initialize_all_variables())  # deprecated alias of global_variables_initializer

for i in range(10):
        s.run(train_step, feed_dict={x: x_data, y_: y_data})
        # accuracy after each step, printed as a comma-separated sequence
        print(s.run(accuracy, feed_dict={x: x_data, y_: y_data}), end=",")
I get the following output:

0.536667,0.46,0.46,0.46,0.46,0.46,0.46,0.46,0.46,0.46,0.46,0.46,0.46

Right after the first iteration it gets stuck at 0.46.

The plot: [decision-boundary image not preserved in this copy]

Then I changed the code to use gradient descent:

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
Now I get the following results: 0.54,0.54,0.63,0.70,0.75,0.8,0.84,0.89,0.92,0.94,0.94

The plot: [decision-boundary image not preserved in this copy]

My questions:

1) Why does the AdamOptimizer fail?

2) If the problem is the learning rate, or some other parameter I need to tune, how do I debug this in general?

3) I ran gradient descent for 50 iterations (above I ran only 10) and printed the accuracy every 5 iterations; this is the output:

0.54,0.8,0.95,0.96,0.92,0.89,0.87,0.84,0.81,0.79,0.77

Clearly it starts to diverge; it looks like the problem is the fixed learning rate (it overshoots after a point). Am I right? (One way to test this is sketched right after this list.)

4) What can be done in this toy example to get a better fit? Ideally the accuracy should be 1.0, since the data is linearly separable.
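Regarding question 3: a standard way to test the fixed-learning-rate hypothesis is to let the rate decay during training. The following is only a sketch, not part of the original post; it assumes the TF 1.x tf.train.exponential_decay schedule, reuses the cross_entropy defined above, and the decay constants are made up for illustration:

global_step = tf.Variable(0, trainable=False)  # incremented once per training step
# illustrative constants: start at 0.5, multiply by 0.9 every 10 steps
learning_rate = tf.train.exponential_decay(0.5, global_step,
                                           decay_steps=10, decay_rate=0.9)
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        cross_entropy, global_step=global_step)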

[Edit]

As requested by @Yaroslav, here is the code used for plotting:

import numpy as np
import matplotlib.pyplot as plt

# evaluate the trained model on a dense grid to draw the decision regions
xx = [v[0] for v in x_data]
yy = [v[1] for v in x_data]
x_min, x_max = min(xx) - 0.5, max(xx) + 0.5
y_min, y_max = min(yy) - 0.5, max(yy) + 0.5
xxx, yyy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
pts = np.c_[xxx.ravel(), yyy.ravel()].tolist()
# ---> Important: predicted class for every grid point
z = s.run(tf.argmax(y, 1), feed_dict={x: pts})
z = np.array(z).reshape(xxx.shape)
plt.pcolormesh(xxx, yyy, z)
plt.scatter(xx, yy, c=['r' if v[0] == 1 else 'b' for v in y_data], edgecolor='k', s=50)
plt.show()

TL;DR: your loss is wrong. The loss goes to zero without the accuracy increasing.

The problem is that your probabilities are not normalized. If you look at your loss, it is going down, but the probabilities of both y[:, 0] and y[:, 1] are going to 1, so the argmax is meaningless.
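One way to see this in the question's sigmoid model (a hypothetical check, not in the original answer, reusing the session s and tensors defined there): sum the predicted probabilities per example; a properly normalized distribution would sum to 1.

# Sketch: per-example probability sums under the sigmoid model from the question
probs = s.run(y, feed_dict={x: x_data})
print(probs.sum(axis=1)[:5])  # climbs toward 2.0 as both outputs approach 1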

The traditional solution is to use only 1 degree of freedom instead of 2, so the probability of the first class is sigmoid(y0) and the probability of the second class is 1 - sigmoid(y0), which makes the cross-entropy something like -y[0]*log(sigmoid(y0)) - y[1]*log(1 - sigmoid(y0)).
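Here is a minimal sketch of that one-degree-of-freedom formulation (an illustration, not code from the answer); it assumes the TF 1.x API and uses tf.nn.sigmoid_cross_entropy_with_logits as a numerically stable version of the formula above:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 2])
w = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(tf.zeros([1]))
logit = tf.matmul(x, w) + b  # a single logit y0 per example
# P(first class) = sigmoid(y0), P(second class) = 1 - sigmoid(y0)
y0 = tf.placeholder(tf.float32, [None, 1])  # 1.0 for the first class, 0.0 otherwise
# stable form of -y0*log(sigmoid(logit)) - (1 - y0)*log(1 - sigmoid(logit))
cross_entropy = tf.reduce_sum(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=y0, logits=logit))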

Alternatively, you can keep your code and just use tf.nn.softmax instead of tf.sigmoid. Softmax divides by the sum of the probabilities, so the optimizer cannot decrease the loss by driving both probabilities to 1 at the same time.
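To make the normalization point concrete, here is a tiny NumPy illustration (not from the original answer): elementwise sigmoids let both "probabilities" approach 1 together, while softmax sums to 1 by construction.

import numpy as np

logits = np.array([3.0, 3.0])
sig = 1.0 / (1.0 + np.exp(-logits))           # elementwise sigmoid
soft = np.exp(logits) / np.exp(logits).sum()  # softmax
print(sig, sig.sum())    # [0.9526 0.9526], sum ~1.905: not a distribution
print(soft, soft.sum())  # [0.5 0.5], sum 1.0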

The following reaches an accuracy of 0.99666673:

import numpy as np
import tensorflow as tf

tf.reset_default_graph()
np.random.seed(2010)
n = 300
x_data = np.random.random([n, 2]).tolist()
y_data = [[1., 0.] if v[0] > 0.5 else [0., 1.] for v in x_data]

x = tf.placeholder(tf.float32, [None, 2])
W = tf.Variable(tf.zeros([2, 2]))
b = tf.Variable(tf.zeros([2]))
y = tf.nn.softmax(tf.matmul(x, W) + b)  # normalized probabilities

y_ = tf.placeholder(tf.float32, [None, 2])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
regularizer = tf.reduce_sum(tf.square(y))
train_step = tf.train.AdamOptimizer(1.0).minimize(cross_entropy + regularizer)

correct_predict = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32))

s = tf.Session()
s.run(tf.initialize_all_variables())

for i in range(30):
        s.run(train_step, feed_dict={x: x_data, y_: y_data})
        # print the loss and the accuracy together so their relationship is visible
        cost1, cost2 = s.run([cross_entropy, accuracy], feed_dict={x: x_data, y_: y_data})
        print(cost1, cost2)

PS: could you share the code you used to plot the figures above?

Thanks for the detailed explanation, that solved it. The plotting code: [edited out due to formatting problems; I am adding the code to the question instead.] Also, may I ask how you debugged my code? Looking at your modified version, it seems you figured it out by printing the cross-entropy and the accuracy, right?

Yes. First I checked that the minimization was working, then tried to find out why there was a mismatch between the loss and the accuracy.