Keras: making layers non-trainable at the end of an epoch changes the loss in the next epoch


I have written a custom callback that automatically freezes layers in my multi-input/multi-output network once the corresponding loss drops below a certain threshold.

How it works: after each epoch, the callback checks whether the loss for the first weight in self.weights has dropped below its entry in self.loss_thresholds. If so, it freezes the corresponding layers and makes the layers of the next weight trainable. It then sets the model's stop_training flag so that the fit function exits and the model can be recompiled.
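
For context, the callback relies on the standard Keras freeze-and-recompile pattern: changing layer.trainable only takes effect once the model is compiled again. A minimal sketch (model is assumed to be an already-built Keras model; the layer name "sraj" is just an example):

from keras import optimizers, losses

# Freeze every layer whose name contains "sraj" ...
for layer in model.layers:
    if "sraj" in layer.name:
        layer.trainable = False

# ... then recompile so the changed trainable flags actually take effect.
model.compile(optimizer=optimizers.Adamax(lr=0.025),
              loss=losses.mean_squared_error)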

from keras import backend as K
from keras.callbacks import Callback

class WeightLossCallback(Callback):

    def __init__(self, loss_weights):
        super(WeightLossCallback, self).__init__()
        self.loss_weights = loss_weights
        # Outputs are processed in this order; the first entry is the
        # one currently being trained.
        self.weights = ["akVol", "biltarres", "riskap1", "sraj", "zbj"]
        self.loss_thresholds = {"akVol": 0.12, "biltarres": 0.12, "riskap1": 0.14, "sraj": 1.5, "zbj": 14.}
        self.learning_rate = {"akVol": 0.05, "biltarres": 0.05, "riskap1": 0.05, "sraj": 0.01, "zbj": 0.05}
        self.stopTraining = False

    def on_train_begin(self, logs={}):
        # Start each fit() run with the learning rate of the output
        # currently being trained.
        K.set_value(self.model.optimizer.lr, self.learning_rate[self.weights[0]])

    def on_epoch_end(self, epoch, logs={}):

        if epoch < 10:
            return

        def freezeAttAndLayer(weight):
            print("Setting weight " + weight + " to zero.")
            self.weights.remove(weight)
            self.loss_weights[weight] = 0.0
            self.stopTraining = True

            print("Step 2: Freezing corresponding layers.")
            for layer in self.model.layers:
                if weight in layer.name:
                    print("Freezing layer " + layer.name + ".")
                    layer.trainable = False

        def unfreezeAttAndLayer(weight):
            print("Step 3: Unfreezing corresponding layers.")
            # Give the newly activated output a large loss weight.
            self.loss_weights[weight] = 1000.
            for layer in self.model.layers:
                if weight in layer.name:
                    print("Unfreezing layer " + layer.name + ".")
                    layer.trainable = True

        # Special case: only one weight left in self.weights.
        if len(self.weights) < 2:
            lastWeight = self.weights[0]
            if logs[lastWeight + "_loss"] >= self.loss_thresholds[lastWeight]:
                return
            else:
                freezeAttAndLayer(lastWeight)
                self.model.stop_training = True
                return

        currentWeight = self.weights[0]
        nextWeight = self.weights[1]
        print("Step 1: Checking weight thresholds for " + currentWeight + " and " + nextWeight + "...")

        if logs[currentWeight + "_loss"] < self.loss_thresholds[currentWeight]:
            freezeAttAndLayer(currentWeight)
            unfreezeAttAndLayer(nextWeight)

        if self.stopTraining:
            self.stopTraining = False
            self.model.stop_training = True
            print(self.loss_weights)
            print(self.weights)
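
One way to double-check that the freezing actually took hold after recompilation is to inspect the model's trainable state (a minimal sketch; model is assumed to be the recompiled model from the training loop below):

for layer in model.layers:
    print(layer.name, layer.trainable)
print("Trainable weight tensors: " + str(len(model.trainable_weights)))
print("Non-trainable weight tensors: " + str(len(model.non_trainable_weights)))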
The problem is this: before the layers are frozen, the sraj loss is 1.4885, below its threshold of 1.5. But after freezing and recompiling the network, the loss jumps to 2.5473 and then stays constant.

So my question is: why does this jump happen, even though I am clearly freezing all the relevant layers?

Thanks for your help.
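
For reference, here is the training loop that drives the callback: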

from keras import optimizers, losses
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

ModelCpt = ModelCheckpoint("C:/Users/pan11811/Desktop/ModelCheckpoint/test.h5", monitor="loss",
                           save_best_only=True, save_weights_only=False)
WeightLossCpt = WeightLossCallback(loss_weights)

epochCount = 0
while epochCount < 21:
    # Recreate the callback each round so it monitors the loss of the
    # output currently being trained.
    ReduceLRCpt = ReduceLROnPlateau(patience=35, min_delta=0.1, factor=0.6,
                                    monitor=WeightLossCpt.weights[0] + "_loss",
                                    verbose=1, min_lr=0.001)
    model.fit(inputDic, outputDic, epochs=2000 + epochCount, batch_size=4000,
              callbacks=[ReduceLRCpt, ModelCpt, WeightLossCpt], verbose=1)
    # Recompile so the changed trainable flags and loss weights take effect.
    model.compile(optimizer=optimizers.Adamax(lr=0.025),
                  loss=losses.mean_squared_error,
                  loss_weights=WeightLossCpt.loss_weights)
    epochCount += 1
    if len(WeightLossCpt.weights) == 0:
        print("Training completed.")
        break

    print("#######################################")
    print("RECOMPILED_MODEL")
    print("#######################################")
Epoch 968/2003
64538/64538 [==============================] - 0s 5us/step - loss: 1489.0058 - tbaRenormalized_loss: 11.6664 - tbasum_loss: 180.4811 - vtstkj_loss: 0.0000e+00 - sraj_loss: 1.4885 - riskap1_loss: 0.1149 - biltarres_loss: 0.1144 - akVol_loss: 0.1210 - zbj_loss: 60316658.7545 - zb_loss: 18562112.8038 - sra_output_loss: 0.4971 - vtstk_loss: 0.0000e+00
Step 1: Checking weight thresholds for sraj and zbj...
Setting weight sraj to zero.
Step 2: Freezing corresponding layers.
Freezing layer input_sraj.
Freezing layer sraj_0.
Freezing layer sraj_1.
Freezing layer sraj_2.
Freezing layer sraj_3.
Freezing layer sraj_4.
Freezing layer sraj_5.
Freezing layer sraj.
Step 3: Unfreezing corresponding layers.
Unfreezing layer zbj.
{'tbaRenormalized': 0.0, 'tbasum': 0.0, 'sraj': 0.0, 'riskap1': 0.0, 'zb': 0.0, 'biltarres': 0.0, 'akVol': 0.0, 'vtstk': 0.0, 'zbj': 1000.0}
['zbj']

#######################################
RECOMPILED_MODEL
#######################################
Epoch 1/2004
64538/64538 [==============================] - 0s 6us/step - loss: 18151638696.1287 - tbaRenormalized_loss: 11.6664 - tbasum_loss: 180.4811 - vtstkj_loss: 0.0000e+00 - sraj_loss: 2.5473 - riskap1_loss: 0.1149 - biltarres_loss: 0.1144 - akVol_loss: 0.1210 - zbj_loss: 18151638.7103 - zb_loss: 5467256.6113 - sra_output_loss: 0.9667 - vtstk_loss: 0.0000e+00