TensorFlow custom loss function using the gradient of the output with respect to the input


I have a model that looks like this:

    from tensorflow import keras

    # --- Create model
    Input1 = keras.layers.Input(shape=(1,), name='Vgs')
    Input2 = keras.layers.Input(shape=(1,), name='Vds')
    Vgs_Vds = keras.layers.Concatenate(axis=1)([Input1, Input2])
    hidden1 = keras.layers.Dense(15, activation="tanh", name='Activation')(Vgs_Vds)
    Output = keras.layers.Dense(1, name='Charge')(hidden1)
    Output1 = keras.layers.Dense(1, name='dQ_dVgs')(Output)
    Output2 = keras.layers.Dense(1, name='dQ_dVds')(Output)
    model = keras.Model(inputs=[Input1, Input2], outputs=[Output1, Output2])
    model.summary()
    keras.utils.plot_model(model, "my_first_model.png")

    print(model.layers[5].name)

    # Wrap K.gradients in a Lambda layer: returns d(output_tensor)/d(input_tensor)
    def grad(input_tensor, output_tensor):
        return keras.layers.Lambda(
            lambda z: keras.backend.gradients(z[0], z[1]),
            output_shape=[1])([output_tensor, input_tensor])

    def custom_loss_1(input_tensor, output_tensor):
        def custom_loss(y_true, y_pred):
            # mse_loss = keras.losses.mean_squared_error(y_true, y_pred)
            derivative_loss = keras.losses.mean_squared_error(
                y_true=y_true, y_pred=grad(input_tensor, output_tensor)[0])
            return derivative_loss
        return custom_loss

    def custom_loss_2(input_tensor, output_tensor):
        def custom_loss(y_true, y_pred):
            # mse_loss = keras.losses.mean_squared_error(y_true, y_pred)
            derivative_loss = keras.losses.mean_squared_error(
                y_true=y_true, y_pred=grad(input_tensor, output_tensor)[0])
            return derivative_loss
        return custom_loss

    # --- Configure learning process
    model.compile(
        optimizer=keras.optimizers.SGD(0.004),
        loss={'dQ_dVgs': custom_loss_1(model.layers[0].input, model.layers[4].output),
              'dQ_dVds': custom_loss_2(model.layers[1].input, model.layers[4].output)},
        metrics=['MeanSquaredError'])

    # --- Train from dataset
    model.fit([x, y], [Cgg_Flatten, -Cgd_Flatten], epochs=1000)
I want to build a custom loss function that compares (dOutput1/dInput1) with y_true. How can I do that? I don't know how to use a gradient function inside a custom loss function to compute the loss.
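
One common way to get d(Output)/d(Input) into the training signal, sketched below (this is not from the original post; the class name ChargeModel is hypothetical and the inputs are assumed to arrive with shape (batch, 1)), is to compute the derivatives inside the forward pass with tf.GradientTape and expose them as regular model outputs, so the built-in mean-squared-error loss can be applied to them directly:

    import tensorflow as tf
    from tensorflow import keras

    # Minimal sketch: dQ/dVgs and dQ/dVds are computed in call() with
    # tf.GradientTape and returned as ordinary outputs, so no gradient
    # computation is needed inside the loss function itself.
    class ChargeModel(keras.Model):  # hypothetical name
        def __init__(self):
            super().__init__()
            self.hidden = keras.layers.Dense(15, activation="tanh", name="Activation")
            self.charge = keras.layers.Dense(1, name="Charge")

        def call(self, inputs):
            vgs, vds = inputs  # each assumed shape (batch, 1)
            with tf.GradientTape(persistent=True) as tape:
                tape.watch([vgs, vds])
                q = self.charge(self.hidden(tf.concat([vgs, vds], axis=1)))
            dq_dvgs = tape.gradient(q, vgs)  # dQ/dVgs, shape (batch, 1)
            dq_dvds = tape.gradient(q, vds)  # dQ/dVds, shape (batch, 1)
            del tape
            return dq_dvgs, dq_dvds

    model = ChargeModel()
    model.compile(optimizer=keras.optimizers.SGD(0.004),
                  loss="mean_squared_error")
    # model.fit([x, y], [Cgg_Flatten, -Cgd_Flatten], epochs=1000)

Because the derivatives are themselves model outputs, Keras differentiates through the inner tape when it backpropagates the MSE loss into the weights of the charge network.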

If I run this code, it throws an error telling me that a variable has None for gradient: "Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval."

If I add the mse_loss term back into my custom loss function, it runs perfectly smoothly. But all I want is the derivative loss.
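
For reference, the variant described as "running smoothly" looks roughly like the sketch below (reusing grad, input_tensor, and output_tensor from the code above). The error most likely disappears in that variant because y_pred, the output of the dQ_dVgs/dQ_dVds Dense layers, re-enters the loss, so the weights of those layers are connected to it again and no longer receive a None gradient:

    def custom_loss(y_true, y_pred):
        mse_loss = keras.losses.mean_squared_error(y_true, y_pred)
        derivative_loss = keras.losses.mean_squared_error(
            y_true=y_true, y_pred=grad(input_tensor, output_tensor)[0])
        return mse_loss + derivative_loss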


Thank you.

What are you trying to do? I have never seen a gradient put inside a loss function. Is this from a paper? If so, can you provide a link? If you just want to compute the gradient of the loss, you don't need any "def grad" function.
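
The two kinds of gradient can be kept apart with a small illustration (the sample values are hypothetical, and q_model stands for a plain Q(Vgs, Vds) network): the gradient of the loss with respect to the weights is indeed handled automatically by the optimizer, whereas the gradient of the model output with respect to its inputs, which is what the question asks for, has to be requested explicitly, e.g. with tf.GradientTape:

    import tensorflow as tf

    vgs = tf.constant([[0.5]])  # hypothetical sample values
    vds = tf.constant([[1.0]])
    with tf.GradientTape() as tape:
        tape.watch([vgs, vds])
        q = q_model([vgs, vds])  # q_model: a plain Q(Vgs, Vds) network
    dq_dvgs, dq_dvds = tape.gradient(q, [vgs, vds])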