Python optimizer minimize error: 'float' object has no attribute 'dtype'

I am a beginner with TensorFlow. I am having some problems with the gradient computation in TensorFlow 2.0. Can anyone help me?

Here is my code. The error message is:

if not t.dtype.is_floating:
AttributeError: 'float' object has no attribute 'dtype'
I tried:

w = tf.Variable([1.0,1.0],dtype = tf.float32)
The message changed to:

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable
import tensorflow as tf
import numpy as np

train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

# w = tf.Variable([1.0, 1.0], dtype=tf.float32)
w = [1.0, 1.0]
opt = tf.keras.optimizers.SGD(0.1)
mse = tf.keras.losses.MeanSquaredError()

for i in range(20):
    print('Epoch: ', i, 'w: ', w)
    with tf.GradientTape() as tape:
        logits = w[0] * train_X + w[1]
        loss = mse(train_Y, logits)
    w = opt.minimize(loss, var_list=w)

I don't know how to fix it. Thank you for any comments.

You are not using GradientTape correctly; I have demonstrated below how the code should apply it. I created a model with a single-unit Dense layer, which plays the role of your w variable.

import tensorflow as tf
import numpy as np
train_X = np.linspace(-1, 1, 100)
train_X = np.expand_dims(train_X, axis=-1)
print(train_X.shape)    # (100, 1)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10
print(train_Y.shape)    # (100, 1)

# First create a  model with one unit of dense and one bias
input = tf.keras.layers.Input(shape=(1,))
w = tf.keras.layers.Dense(1)(input)   # use_bias is True by default
model = tf.keras.Model(inputs=input, outputs=w)

opt=tf.keras.optimizers.SGD(0.1)
mse=tf.keras.losses.MeanSquaredError()

for i in range(20):
    print('Epoch: ', i)
    with tf.GradientTape() as grad_tape:
        logits = model(train_X, training=True)
        model_loss = mse(train_Y, logits)
        print('Loss =', model_loss.numpy())

    gradients = grad_tape.gradient(model_loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
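
For reference, here is a minimal sketch (an addition, not part of the original answer) of an alternative fix that keeps w as a single tf.Variable and calls tape.gradient / opt.apply_gradients directly. The key points are that the tape can only track tf.Variables (not a plain Python list of floats), and that in TF 2 opt.minimize expects the loss as a zero-argument callable, which explains the two errors in the question. The float32 casts are an assumption to keep the NumPy data and the variable on the same dtype.

import tensorflow as tf
import numpy as np

train_X = np.linspace(-1, 1, 100).astype(np.float32)
train_Y = (2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10).astype(np.float32)

# w must be a tf.Variable (not a Python list) so the tape can track it.
w = tf.Variable([1.0, 1.0], dtype=tf.float32)

opt = tf.keras.optimizers.SGD(0.1)
mse = tf.keras.losses.MeanSquaredError()

for i in range(20):
    with tf.GradientTape() as tape:
        logits = w[0] * train_X + w[1]   # slope * x + intercept
        loss = mse(train_Y, logits)
    # Compute the gradient of the loss with respect to w and apply the update,
    # instead of passing an eager loss tensor to opt.minimize.
    grads = tape.gradient(loss, [w])
    opt.apply_gradients(zip(grads, [w]))
    print('Epoch:', i, 'w:', w.numpy(), 'loss:', loss.numpy())

Either approach optimizes the same two parameters; the answer's Dense(1) layer simply packages the slope and the intercept as the layer's kernel and bias variables.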

Thank you very much for the quick answer. I will study it carefully.