Custom loss function (IoU loss) in Python Keras and a gradient error?
I'm new to ML and I'm trying to implement my own loss function (an IoU loss), but I get an error about gradients ("No gradients provided for any variable"). Note that I'm trying to predict numbers that represent rows of a matrix (e.g. y_pred = [1 5 3 9]). In the loss function I should count the number of correctly predicted rows and divide it by the total number of elements of y_true. Since the loss should be minimized, I return 1 - IoU at the end of the function. Here is my function; I hope it makes the problem clearer, since I'm not sure how to explain it in more detail:
def loss_IoU(y_true, y_pred):
    # round the predictions, since the network outputs floats, e.g. [1.5, 2.0, 8.98, ...]
    round_y_pred = tf.round(y_pred)  # shape (None, 6): batch dimension None, 6 outputs per sample
    # count the matching entries in each row, i.e. for each (y_pred, y_true) pair -> shape (None, 1)
    intersection = tf.math.count_nonzero(round_y_pred == y_true, axis=1, keepdims=True)
    union = y_true.shape[1]  # the number of elements per row, 6 here
    iou = intersection / union
    return 1 - iou
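The "No gradients provided" error is caused by the operations in this loss: tf.round is piecewise constant and the == comparison produces booleans, so TensorFlow has no gradient to propagate through either of them, and the optimizer receives None for every variable. A minimal sketch (the variable names are mine) that reproduces the effect with tf.GradientTape:

```python
import tensorflow as tf

x = tf.Variable([1.4, 2.6, 8.98])
with tf.GradientTape() as tape:
    # tf.round is flat almost everywhere: nudging x does not change the
    # output, so there is no gradient path from the loss back to x
    loss = tf.reduce_sum(tf.round(x))
grad = tape.gradient(loss, x)
print(grad)  # None -> exactly what "No gradients provided for any variable" means
```

Any loss built only from rounding, counting, and exact comparisons has this problem; the fix is to replace the hard operations with smooth surrogates that gradients can flow through.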
Here is the error I get:
ValueError Traceback (most recent call last)
<ipython-input-176-631f68b50b34> in <module>()
9 history = model.fit(AS_Training_Set, Label_Training_Set,
10 steps_per_epoch=8, epochs=600, validation_data=
---> 11 (AS_Validation_Set, Label_Validation_Set))
10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
971 except Exception as e: # pylint:disable=broad-except
972 if hasattr(e, "ag_error_metadata"):
--> 973 raise e.ag_error_metadata.to_exception(e)
974 else:
975 raise
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:757 train_step
self.trainable_variables)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:2737 _minimize
trainable_variables))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:562 _aggregate_gradients
filtered_grads_and_vars = _filter_grads(grads_and_vars)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1271 _filter_grads
([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['conv2d_16/kernel:0', 'conv2d_16/bias:0', 'conv2d_17/kernel:0', 'conv2d_17/bias:0', 'batch_normalization_8/gamma:0', 'batch_normalization_8/beta:0', 'conv2d_18/kernel:0', 'conv2d_18/bias:0', 'conv2d_19/kernel:0', 'conv2d_19/bias:0', 'batch_normalization_9/gamma:0', 'batch_normalization_9/beta:0', 'conv2d_20/kernel:0', 'conv2d_20/bias:0', 'conv2d_21/kernel:0', 'conv2d_21/bias:0', 'batch_normalization_10/gamma:0', 'batch_normalization_10/beta:0', 'conv2d_22/kernel:0', 'conv2d_22/bias:0', 'conv2d_23/kernel:0', 'conv2d_23/bias:0', 'batch_normalization_11/gamma:0', 'batch_normalization_11/beta:0', 'dense_4/kernel:0', 'dense_4/bias:0', 'dense_5/kernel:0', 'dense_5/bias:0'].
Can anyone help me fix this error? I tried to fix it myself but got nowhere, and I don't know whether the problem is the rounding function. Thanks in advance.

For an IoU loss I use this function with the Pascal VOC dataset:
import tensorflow as tf
from tensorflow.keras import backend as K

def IoU_loss(y_true, y_pred):
    nb_classes = K.int_shape(y_pred)[-1]
    iou = []
    pred_pixels = K.argmax(y_pred, axis=-1)
    for i in range(0, nb_classes):  # adjust the range to skip background/void classes if needed
        true_labels = K.equal(y_true[:, :, 0], i)
        pred_labels = K.equal(pred_pixels, i)
        inter = tf.cast(true_labels & pred_labels, dtype=tf.int32)
        union = tf.cast(true_labels | pred_labels, dtype=tf.int32)
        legal_batches = K.sum(tf.cast(true_labels, dtype=tf.int32), axis=1) > 0
        ious = K.sum(inter, axis=1) / K.sum(union, axis=1)
        iou.append(K.mean(ious[legal_batches]))  # average only over batches that contain class i
    iou = tf.stack(iou)
    legal_labels = ~tf.math.is_nan(iou)  # drop classes that never appeared (0/0 -> NaN)
    iou = iou[legal_labels]
    return K.mean(iou)
It needs some modifications, but it should work for you as well.

Thank you very much, I will give it a try and hope it works for my case too.
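For the original regression-style setup (one predicted number per row), another option is to keep the counting idea but make it differentiable. The sketch below is my own, not part of the answer above; soft_iou_loss and temperature are names I made up. It replaces the hard round-and-compare with a smooth per-entry match score:

```python
import tensorflow as tf

def soft_iou_loss(y_true, y_pred, temperature=1.0):
    """Differentiable surrogate for the round-and-count IoU loss.

    Instead of rounding y_pred and counting exact matches, each entry is
    scored with exp(-(y_pred - y_true)^2 / temperature): predictions close
    to the target score ~1, distant ones ~0, and the score is smooth in
    y_pred, so gradients can flow.
    """
    y_true = tf.cast(y_true, y_pred.dtype)
    match = tf.exp(-tf.square(y_pred - y_true) / temperature)  # (batch, n), values in (0, 1]
    soft_intersection = tf.reduce_sum(match, axis=1)           # soft count of correct entries
    union = tf.cast(tf.shape(y_true)[1], y_pred.dtype)         # total entries per row
    return 1.0 - soft_intersection / union                     # (batch,), 0 for a perfect match
```

The temperature controls how strictly "correct" a prediction must be: as it shrinks toward 0 the surrogate approaches the original hard count, but the gradients become less informative, so some tuning is needed.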