Manually computed loss an order of magnitude higher than the loss reported by fit for the last epoch


I have the following neural network:

import numpy as np
import tensorflow as tf
from keras.layers import Input, Dense, Lambda, Add
from keras.models import Model, Sequential

def customLoss(yTrue, yPred):
    # mean relative error between targets and predictions
    loss_value = np.divide(abs(yTrue - yPred), yTrue)
    loss_value = tf.reduce_mean(loss_value)
    return loss_value

def model(inp_size):

    inp = Input(shape=(inp_size,))
    x1 = Dense(100, activation='relu')(inp)
    x1 = Dense(50, activation='relu')(x1)
    x1 = Dense(20, activation='relu')(x1)
    x1 = Dense(1, activation = 'linear')(x1)

    x2 = Dense(100, activation='relu')(inp)
    x2 = Dense(50, activation='relu')(x2)
    x2 = Dense(20, activation='relu')(x2)
    x2 = Dense(1, activation = 'linear')(x2)

    x3 = Dense(100, activation='relu')(inp)
    x3 = Dense(50, activation='relu')(x3)
    x3 = Dense(20, activation='relu')(x3)
    x3 = Dense(1, activation = 'linear')(x3)

    x4 = Dense(100, activation='relu')(inp)
    x4 = Dense(50, activation='relu')(x4)
    x4 = Dense(20, activation='relu')(x4)
    x4 = Dense(1, activation = 'linear')(x4)



    x1 = Lambda(lambda x: x * baseline[0])(x1)
    x2 = Lambda(lambda x: x * baseline[1])(x2)
    x3 = Lambda(lambda x: x * baseline[2])(x3)
    x4 = Lambda(lambda x: x * baseline[3])(x4)
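    # Note: 'baseline' is a length-4 sequence of scaling constants that is
    # assumed to be defined elsewhere in the script; it is not shown here.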

    out = Add()([x1, x2, x3, x4])

    return Model(inputs = inp, outputs = out)
y_train=y_train.astype('float32')
y_test=y_test.astype('float32')



NN_model = Sequential()
NN_model = model(X_train.shape[1])
NN_model.compile(loss=customLoss, optimizer='Adamax', metrics=[customLoss])

NN_model.fit(X_train, y_train, epochs=500,verbose = 1)
train_predictions = NN_model.predict(X_train)


predictions = NN_model.predict(X_test)
MAE = customLoss(y_test, predictions)
The last line of the training output is:

3663/3663 [==============================] - 0s 103us/step - loss: 0.0055 - customLoss: 0.0055

However, when I print customLoss(y_train, train_predictions)

I get 0.06469738.

I have read that the loss reported during training is the running average over the epoch, but surely the final value should not be worse, and certainly not by an order of magnitude? I am fairly new to Keras, so any advice is much appreciated.
Thanks.
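
A quick sanity check is to let Keras itself re-evaluate the trained model on the training data; if evaluate() roughly matches the last epoch's reported loss while the hand-computed number does not, the discrepancy comes from the manual calculation rather than from training:

# Cross-check: Keras recomputes the loss over the full training set.
# With a metric attached, evaluate() returns [loss, customLoss].
scores = NN_model.evaluate(X_train, y_train, verbose=0)
print('evaluate() loss:', scores[0])
print('evaluate() customLoss:', scores[1])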

It turns out that train_predictions had shape (3000, 1) while y_train had shape (3000,). Flattening the predictions,

train_predictions = NN_model.predict(X_train).flatten()

solved the problem.

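To see why the shape mismatch inflates the hand-computed value: NumPy broadcasts a (3000, 1) prediction array against a (3000,) target array into a (3000, 3000) matrix of pairwise errors, so the mean is taken over every prediction/target pair instead of over matching pairs only. A minimal sketch of the effect (the array sizes here are illustrative, not the original data):

import numpy as np

y_true = np.array([1.0, 2.0, 3.0])        # shape (3,), like y_train
y_pred = np.array([[1.1], [2.1], [3.1]])  # shape (3, 1), like model.predict output

# Broadcasting turns the element-wise difference into a (3, 3) matrix,
# so the relative error is averaged over all 9 pairs and comes out inflated.
wrong = np.mean(np.abs(y_true - y_pred) / y_true)
print((y_true - y_pred).shape, wrong)

# Flattening the predictions restores the intended one-to-one comparison.
right = np.mean(np.abs(y_true - y_pred.flatten()) / y_true)
print(right)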