A problem with a simple custom loss function in Python TensorFlow


I have just written the following code:

import tensorflow as tf
m = 5
sigma = 0.2
rf = 0.1
learning_rate = 0.5

batch_size = 32
x_train = tf.compat.v1.random_normal(shape = (batch_size, m))
x_train = tf.exp((rf - 0.5 * sigma ** 2) + sigma * x_train)

W1 = tf.Variable(tf.compat.v1.random_normal([m, m+10], stddev = 0.03), name = 'W1')
b1 = tf.Variable(tf.compat.v1.random_normal([m+10]), name = 'b1')
W2 = tf.Variable(tf.compat.v1.random_normal([m+10,m], stddev = 0.03), name = 'W2')
b2 = tf.Variable(tf.compat.v1.random_normal([m]), name = 'b2')

hidden_out = tf.add(tf.matmul(x_train,W1), b1)
hidden_out = tf.nn.relu(hidden_out)
b = tf.nn.softmax(tf.add(tf.matmul(hidden_out, W2), b2))
c = tf.multiply(b, x_train)

to_minimize = tf.reduce_sum(tf.multiply(b, x_train), axis = 1)

optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate = learning_rate).minimize(-tf.reduce_mean(to_minimize))
First, I define a tensor `x_train`. After that, I define an NN structure whose output is `b`.

My goal is to train the neural network so that, for each sample, it maximizes the scalar product between the input `x` and the network's output.

So I defined the vector `c` and then tried to minimize `-tf.reduce_mean(to_minimize)`.

However, the code raises the following error:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/Users/Marcello/Library/Preferences/PyCharm2019.2/scratches/scratch_1.py", line 27, in <module>
    optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate = learning_rate).minimize(-tf.reduce_mean(to_minimize))
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/training/optimizer.py", line 403, in minimize
    grad_loss=grad_loss)
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_core/python/training/optimizer.py", line 481, in compute_gradients
    "`loss` passed to Optimizer.compute_gradients should "
RuntimeError: `loss` passed to Optimizer.compute_gradients should be a function when eager execution is enabled.

I don't know what this means, nor how to fix it. Thanks in advance.
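For context on the error message: in TF2, eager execution is enabled by default, and under eager execution `Optimizer.minimize` expects the loss as a zero-argument callable that recomputes the loss on each call, rather than a pre-computed tensor. A minimal sketch of that callable pattern, where the variable `w`, the optimizer choice, and `loss_fn` are hypothetical stand-ins and not the question's original model:

```python
import tensorflow as tf

# Hedged sketch, assuming TF2 with eager execution: passing a pre-computed
# loss tensor to minimize() raises the RuntimeError above, while passing a
# zero-argument callable works, because the callable lets TensorFlow re-run
# the computation and trace gradients on every step.
w = tf.Variable(2.0)
opt = tf.keras.optimizers.Adam(learning_rate=0.5)

def loss_fn():
    # Recomputed on every call, so gradients w.r.t. `w` can be traced.
    return -tf.reduce_mean(w * 3.0)

opt.minimize(loss_fn, var_list=[w])  # loss passed as a callable
```

The same idea applies to the code in the question: wrapping the whole forward pass (from `x_train` through `to_minimize`) in a function and passing that function to `minimize`, along with the variables to train, matches what the eager-mode optimizer expects.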

Are you using TF2? @thushv89 Yes, I am.