Python: all gradient values are computed as "None" when manually using BCE loss
I am working on a multi-output model, and I need to weight all the output losses before computing the total loss. I have a customized model.fit() to achieve this.

Since I need to compute the per-sample loss for all four outputs and fuse those sample losses after applying weights, I customized the standard code. Now the loss is computed per sample, but when computing the gradients, all gradient values come out as "None". I also tried tape.watch(loss), but it did not work. Please help me solve this problem.
class CustomModel(keras.Model):
    def train_step(self, data):
        print(tf.executing_eagerly())
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        alpha = 0.1
        loss = 0
        y_pred_all = []
        with tf.GradientTape() as tape:
            bce = tf.keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
            for spl in range(1 if np.shape(x)[0] == None else np.shape(x)[0]):
                tape.watch(loss)
                tape.watch(loss_mean)
                tape.watch(loss_element)
                x_spl = np.reshape(x[spl], (1, np.shape(x)[1], np.shape(x)[2], np.shape(x)[3]))
                y_pred = self(x_spl, training=True)  # Forward pass
                y_pred_all.append(y_pred)
                loss_element = bce(y[spl], y_pred)
                loss_mean = [np.mean(loss_element[0]), np.mean(loss_element[1]), np.mean(loss_element[2]), np.mean(loss_element[3])]
                id = np.argmin(loss_mean)
                for i, ele in enumerate(loss_mean):
                    if i == id:
                        loss_mean[i] *= 1
                    else:
                        loss_mean[i] *= alpha
                loss = loss + np.sum(loss_mean)
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred_all)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
Update

I made the changes suggested by @rvinas. Now it is computing the gradients without any errors, but I am not sure whether the changes I made are correct:
class CustomModel(keras.Model):
    def train_step(self, data):
        # print(tf.executing_eagerly())
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data
        alpha = 0.1
        loss = tf.Variable(0, dtype='float32')
        y_pred_all = []
        with tf.GradientTape() as tape:
            bce = tf.keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
            for spl in tf.range(1 if tf.shape(x)[0] == None else tf.shape(x)[0]):
                loss_mean = tf.convert_to_tensor([])
                x_spl = tf.reshape(x[spl], (1, tf.shape(x)[1], tf.shape(x)[2], tf.shape(x)[3]))
                y_pred = self(x_spl, training=True)  # Forward pass
                y_pred_all.append(y_pred)
                loss_element = bce(y[spl], y_pred)
                loss_mean = [tf.reduce_mean(loss_element[0]), tf.reduce_mean(loss_element[1]), tf.reduce_mean(loss_element[2]), tf.reduce_mean(loss_element[3])]
                id = tf.argmin(loss_mean)
                for i, ele in enumerate(loss_mean):
                    if i == id:
                        loss_mean[i] = tf.multiply(loss_mean[i], 1)
                    else:
                        loss_mean[i] = tf.multiply(loss_mean[i], alpha)
                loss = tf.add(loss, tf.add(tf.add(tf.add(loss_mean[0], loss_mean[1]), loss_mean[2]), loss_mean[3]))
        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred_all)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
The "None" gradients appear because you are using NumPy operations (e.g. np.sum, np.reshape, ...), which disconnects the computation graph. Instead, you need to implement your logic using TensorFlow operations only.

For example, the weighting described in the comments section can be implemented as follows:
bce = tf.keras.losses.BinaryCrossentropy(reduction=tf.keras.losses.Reduction.NONE)
with tf.GradientTape() as tape:
    # Compute element-wise losses
    y_pred = self(x, training=True)
    losses = bce(y, y_pred)  # Shape=(bs, 4)

    # Find maximum loss for each sample
    idx_max = tf.argmax(losses, axis=-1)  # Shape=(bs,)
    idx_max_onehot = tf.one_hot(idx_max, depth=y.shape[-1])  # Shape=(bs, 4)

    # Create weights tensor
    weight_max = 1
    weight_others = 0.1
    weights = idx_max_onehot * weight_max + (1 - idx_max_onehot) * weight_others

    # Aggregate losses
    losses = tf.reduce_sum(weights * losses, axis=-1)
    loss = tf.reduce_mean(losses)
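To see why the NumPy calls break the tape, here is a minimal sketch, independent of the question's model, contrasting np.sum with tf.reduce_sum inside a GradientTape. The tensor names and values are made up for illustration; the point is that once a value passes through NumPy, re-wrapping it in a tensor does not reconnect it to the recorded graph:

```python
import numpy as np
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
w = tf.Variable(2.0)

# NumPy path: y.numpy() leaves the TF graph, and wrapping the
# result back into a tensor does NOT reconnect it to the tape,
# so the gradient comes back as None.
with tf.GradientTape() as tape:
    y = w * x
    loss_np = tf.constant(np.sum(y.numpy()))
grad_np = tape.gradient(loss_np, w)

# Same computation with a TF op keeps the graph connected.
with tf.GradientTape() as tape:
    y = w * x
    loss_tf = tf.reduce_sum(y)
grad_tf = tape.gradient(loss_tf, w)

print(grad_np)  # None
print(grad_tf)  # tf.Tensor(6.0, shape=(), dtype=float32)
```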
You should not use NumPy operations (e.g. np.sum, np.reshape, ...) - this disconnects the graph. Please use TensorFlow operations only instead.

@rvinas Could you suggest what should be done here to solve this problem? I am very new to TF, so I have no knowledge of the TF operations. I used NumPy operations here because I need to operate on / weight each output branch's loss.

At first glance it is hard to understand how each element of the loss should be weighted. Ideally, you should have a weights tensor with the same shape as losses (i.e. (batch_size, nb_elements)) and compute the final weighted loss with tf.reduce_mean(weights * losses). Ideally, you should also avoid the for loop inside the gradient tape block.

@rvinas Actually, I am trying to implement a paper which says we compute the per-sample loss for each scale/output. In my case the number of outputs is 4. So whichever loss (of the 4 output losses) is minimum (for a sample) gets a weight of 1, and the remaining three losses get a weight of 0.1. For example, for one sample, if output_losses = [2, 5, 1, 3] (the 4 elements of the list being the 4 loss values corresponding to the 4 outputs), then according to the weighting logic final_loss = (2*0.1) + (5*0.1) + (1*1) + (3*0.1).

I hope the provided solution helps - note that I am using TF operations only. What is the "a" in depth=a.shape[-1]? In this case the depth is 4, i.e. the number of output elements (a should be y, sorry).

This code gives an error at: losses = tf.reduce_sum(weights * losses, axis=-1) - tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [4,2,6] vs. [4,2,256] [Op:Mul]

Isn't the shape of your losses (4, 2, 256)? In (4, 2, 256), I guess 2 is the batch size, 4 is the number of outputs, and 256 is the sequence length / number of time frames; y (the ground truth) holds the frame labels, so the length of y equals the sequence length.
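The min-based weighting described in the comments (weight 1 for the smallest of the 4 per-sample losses, 0.1 for the rest) can be sketched with pure TF ops by swapping tf.argmin into the answer's one-hot pattern. The loss values below simply reuse the worked example output_losses = [2, 5, 1, 3]:

```python
import tensorflow as tf

# Per-sample losses for the 4 outputs, one sample in the batch,
# taken from the worked example in the comments.
losses = tf.constant([[2.0, 5.0, 1.0, 3.0]])  # Shape=(bs, 4)

alpha = 0.1  # weight for the non-minimum losses

# Weight 1 for the minimum loss of each sample, alpha for the rest.
idx_min = tf.argmin(losses, axis=-1)                  # Shape=(bs,)
onehot = tf.one_hot(idx_min, depth=losses.shape[-1])  # Shape=(bs, 4)
weights = onehot * 1.0 + (1.0 - onehot) * alpha       # Shape=(bs, 4)

# Aggregate: final_loss = 2*0.1 + 5*0.1 + 1*1 + 3*0.1 = 2.0
weighted = tf.reduce_sum(weights * losses, axis=-1)   # Shape=(bs,)
print(weighted.numpy())  # [2.0]
```

This stays fully inside the TF graph, so tape.gradient would receive connected gradients, and it avoids the per-sample Python loop from the question's train_step.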