
Python 3.x: Error fitting a linear binary classifier with TensorFlow: ValueError: No gradients provided for any variable, check your graph


I get an error when trying to fit a linear binary classifier using a step function and mean squared error, instead of softmax and a cross-entropy loss. I cannot get past the error, possibly because of inconsistent shapes. A code sample is included below. Please help.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification as gen_data
from sklearn.model_selection import train_test_split
rng = np.random

# Setting hyperparameters
n_observations = 100
lr = 0.005
n_iter = 100

# Generate input data 
xs, ys = gen_data(n_features=2, n_redundant=0, n_informative=2, 
                  random_state=0, n_clusters_per_class=1)
# Split data into train and test
X_train, X_test, y_train, y_test = train_test_split(xs, ys, test_size=.4)
X_train = np.float32(X_train)
X_test = np.float32(X_test)

# Graph
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

W = tf.Variable(np.float32(rng.randn(2)), name="weight")
b = tf.Variable(np.float32(rng.randn()), name="bias")

def step(x):
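    # Hard threshold: maps x > 0 to +1.0 and x <= 0 to -1.0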
    is_greater = tf.greater(x, 0)
    as_float = tf.to_float(is_greater)
    doubled = tf.multiply(as_float, 2)

    return tf.subtract(doubled, 1)

Y_pred = step(tf.add(tf.multiply(X , W), b))

cost = tf.reduce_mean(tf.squared_difference(Y_pred, Y))
# Using built-in optimization algorithm to train the model:
train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cost)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

for step in range(n_iter):
    sess.run(train_step, feed_dict={X:X_train, Y:y_train})
    print ("iter: {0}; weight: {1}; bias: {2}".format(step, 
                                                      sess.run(W), 
                                                      sess.run(b)))
This is the error:

ValueErrorTraceback (most recent call last)
<ipython-input-17-5a0c4711802c> in <module>()
     26 
     27 # Using built-in optimization algorithm to train the model:
---> 28 train_step = tf.train.GradientDescentOptimizer(0.005).minimize(cost)
     29 
     30 # Using TF differentiation from scratch to implement a step-by-step optimizer

/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.pyc in minimize(self, loss, global_step, var_list, gate_gradients, aggregation_method, colocate_gradients_with_ops, name, grad_loss)
    405           "No gradients provided for any variable, check your graph for ops"
    406           " that do not support gradients, between variables %s and loss %s." %
--> 407           ([str(v) for _, v in grads_and_vars], loss))
    408 
    409     return self.apply_gradients(grads_and_vars, global_step=global_step,

ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'weight:0' shape=(2,) dtype=float64_ref>", "<tf.Variable 'bias:0' shape=() dtype=float32_ref>", "<tf.Variable 'weight_1:0' shape=(2,) dtype=float64_ref>", "<tf.Variable 'bias_1:0' shape=() dtype=float32_ref>", 
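
As a side note, the message itself suggests looking for ops without registered gradients, and the tf.greater / tf.to_float chain inside step() is one such spot. A minimal diagnostic sketch, assuming the graph above has already been built (TF 1.x), is to ask for the gradients directly and observe that they come back as None:

# Hypothetical check, run after building the graph above (TF 1.x only):
# tf.gradients returns None for variables with no gradient path to the loss.
grads = tf.gradients(cost, [W, b])
print(grads)  # expected: [None, None], because step() breaks the gradient path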

Your training data does not change between training steps. That is, each training step feeds the same values for X and Y:

for step in range(n_iter):
    sess.run(train_step, feed_dict={X:X_train, Y:y_train})

If you feed different values for X and Y between training steps, the error should go away.
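
A minimal sketch of the feeding pattern described here, cycling through a different slice of the training data on each step (the batch_size value and the slicing scheme are illustrative assumptions, not part of the original answer):

batch_size = 10                        # assumed value for illustration
n_batches = len(X_train) // batch_size

for it in range(n_iter):
    # pick a different mini-batch on every training step
    start = (it % n_batches) * batch_size
    x_batch = X_train[start:start + batch_size]
    y_batch = y_train[start:start + batch_size]
    sess.run(train_step, feed_dict={X: x_batch, Y: y_batch})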

Thanks @MatthewScarpino. I suppose you mean feeding batches iteratively, but isn't it possible to train the model on a single batch? I thought that is what I am doing here; is that wrong?

The training process needs multiple batches so that it sees different data. If you keep feeding the same data, the optimizer will not work properly.
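
For comparison, a minimal sketch of the differentiable variant the question contrasts against (a sigmoid activation with a cross-entropy loss; the per-example dot product via tf.reduce_sum and the use of tf.nn.sigmoid_cross_entropy_with_logits are assumptions, not part of the original thread). With this graph there is a gradient path from the loss back to W and b, so minimize() builds without the error:

# Assumed differentiable alternative to the hard step() above (TF 1.x):
logits = tf.reduce_sum(tf.multiply(X, W), axis=1) + b   # per-example score w.x + b
probs = tf.sigmoid(logits)                              # predicted probability of class 1
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=Y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(lr).minimize(loss)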