Keras validation loss never approaches zero (it goes down and back up) while training loss keeps decreasing


I built a DenseNet-121 with Keras.

The number of epochs is 400.

The validation loss never approaches zero (it goes down and back up), while the training loss keeps decreasing.

Can you help me?

The output is:

Epoch 1/400
125/125 [==============================] - 21s 165ms/step - loss: 1.9974 - acc: 0.5028 - val_loss: 1.5273 - val_acc: 0.4073
Epoch 2/400
125/125 [==============================] - 19s 151ms/step - loss: 0.9303 - acc: 0.5325 - val_loss: 1.4413 - val_acc: 0.3567
Epoch 3/400
125/125 [==============================] - 19s 151ms/step - loss: 0.8965 - acc: 0.5450 - val_loss: 1.6430 - val_acc: 0.3587
Epoch 4/400
125/125 [==============================] - 19s 151ms/step - loss: 0.8662 - acc: 0.5561 - val_loss: 1.5824 - val_acc: 0.4173
                           ............
Epoch 397/400
125/125 [==============================] - 19s 155ms/step - loss: 0.0129 - acc: 0.9994 - val_loss: 0.9157 - val_acc: 0.8402
Epoch 398/400
125/125 [==============================] - 19s 155ms/step - loss: 0.0116 - acc: 0.9996 - val_loss: 1.0938 - val_acc: 0.7956
Epoch 399/400
125/125 [==============================] - 19s 154ms/step - loss: 0.0123 - acc: 0.9995 - val_loss: 1.2887 - val_acc: 0.7761
Epoch 400/400
125/125 [==============================] - 19s 155ms/step - loss: 0.0124 - acc: 0.9992 - val_loss: 1.1007 - val_acc: 0.8111

import time

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import regularizers
from tensorflow.keras.applications import densenet
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.layers import Activation, Dense
from tensorflow.keras.models import Model

n_classes = 3

def build_model():
    # Note: with include_top=True, base_model already ends in its own
    # 3-class softmax and the pooling argument is ignored, so the Dense
    # head below is stacked on top of a softmax output.
    base_model = densenet.DenseNet121(input_shape=(128, 128, 3),
                                      weights=None,
                                      include_top=True,
                                      pooling='avg', classes=3)
    for layer in base_model.layers:
        layer.trainable = True
    x = base_model.output
    x = Dense(1024, kernel_regularizer=regularizers.l1_l2(0.00001),
              activity_regularizer=regularizers.l2(0.00001))(x)
    x = Activation('relu')(x)
    x = Dense(512, kernel_regularizer=regularizers.l1_l2(0.00001),
              activity_regularizer=regularizers.l2(0.00001))(x)
    x = Activation('relu')(x)
    predictions = Dense(n_classes, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    return model

model = build_model()
# This optimizer instance is created but never used: compile() below is
# given the string 'adam', which builds a fresh default Adam optimizer.
keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
early_stop = EarlyStopping(monitor='val_loss', patience=8, verbose=2, min_delta=1e-3)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=4, verbose=1, min_delta=1e-3)
callbacks_list = [early_stop, reduce_lr]
print("Build model --- %s seconds ---" % (time.time() - start_time))
print('###################### training step #############')
trainy = keras.utils.to_categorical(trainy)
yvalidation = keras.utils.to_categorical(yvalidation)
with tf.device('/device:GPU:0'):
    # callbacks_list is defined above but not passed to fit(), so
    # EarlyStopping and ReduceLROnPlateau never run (all 400 epochs execute).
    model_history = model.fit(trainx, trainy,
                              validation_data=(xvalidation, yvalidation),
                              batch_size=68,
                              epochs=400,
                              verbose=1)