
Python: Keras metric does not give the same result as the metric computed in a callback


I am trying to use a pretrained VGG16 network for regression. As the loss and the metric I chose the mean absolute error. To check that this score is correct, I implemented the mean absolute error myself in a callback. However, the results differ from what Keras reports:

Training MAE:126.649451276
Epoch 1/100
638/638 [==============================] - 406s - loss: 38.9601 - mean_absolute_error: 38.9601
Training MAE:40.7683742351
Epoch 2/100
638/638 [==============================] - 362s - loss: 19.8719 - mean_absolute_error: 19.8719
Training MAE:43.2516028945
The training MAE should be the same as (or at least very close to) the loss and the mean absolute error reported for the epoch above. For the first epoch this holds. For the second epoch it does not: my MAE is 43.25, but the loss is 19.87, and the mean absolute error reported by Keras is 19.87 as well.

I have cleaned up my code and tried to find the cause, but I cannot find it. Why does this happen?

My code:

from keras.layers.core import Flatten, Dense, Dropout
import numpy as np
from keras.preprocessing import image
from keras.applications.vgg16 import VGG16
from keras import optimizers
from keras.models import Model
import os
from keras.layers.core import *
from keras.callbacks import Callback, ModelCheckpoint

os.environ["CUDA_VISIBLE_DEVICES"]="2"
model_checkpoints = "/home/usr/PycharmProjects/RSNA/model_checkpoints/model2.hdf5"
data_dir = "/home/usr/PycharmProjects/RSNA/data/"
data_training = "dataset/training"
training_images = "boneage-training-dataset/"
training_gt = "training_gt/"
n_batch = 16
n_training_samples = 10213
n_validation_samples = 1136
n_testing_samples = 1262

def mae(X, y, mdl):
    # mean absolute error of the model's predictions on X against the labels y
    pred = mdl.predict(X)
    gt = y
    return str(np.mean(np.abs(np.array(gt) - np.array(pred))))

class LossHistory(Callback):
    # on_epoch_begin fires before the epoch trains, so the score printed
    # ahead of epoch N reflects the weights left by epoch N-1
    def on_epoch_begin(self, epoch, logs={}):
        mae_score = mae(X_train, y_train, self.model)
        print "Training MAE:" + mae_score


def regression_flow_from_directory(flow_from_directory_gen, rev_indices):
    # map the generator's sparse class indices back to float labels for regression
    for x, y in flow_from_directory_gen:
        yield x, [float(rev_indices[val]) for val in y]

if __name__ == '__main__':

    width = 224
    height = 224
    X_train = []
    y_train = []

    train_datagen = image.ImageDataGenerator(
        rescale=1./255,
        width_shift_range=0.2,
        height_shift_range= 0.2,
    )

    train_generator = train_datagen.flow_from_directory(
        data_dir+data_training,
        target_size=(width, height),
        batch_size=n_batch,
        color_mode='rgb',
        class_mode='sparse',
        seed=42)


    indices = train_generator.class_indices
    rev_indices = dict((v,k) for k, v in indices.iteritems())

    train_generator = regression_flow_from_directory(train_generator,rev_indices)

    i = 0
    # collect one epoch's worth of augmented batches into fixed arrays
    print "Steps per epoch: " + str(n_training_samples//n_batch)
    for x, y in train_generator:
        if i <= n_training_samples//n_batch:
            X_train.extend(x)
            y_train.extend(y)
            i += 1
        else:
            break

    print "Maximum: " + str(np.max(y_train))

    X_train = np.array(X_train)
    print X_train.shape

    model = VGG16(weights='imagenet', include_top=False,input_shape = (224, 224, 3))
    last = model.output
    x = Flatten(name='flatten')(last)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dropout(0.5, noise_shape=None, seed=None)(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dense(1, activation='relu', name='predictions')(x)

    my_model = Model(input=model.input, output=x)
    my_model.compile(loss="mae", optimizer=optimizers.SGD(lr=0.00001, momentum=0.9),
                        metrics=["mae"])

    history = LossHistory()
    print my_model.summary()
    print n_validation_samples//n_batch
    my_model.fit_generator(
        train_generator,
        steps_per_epoch=n_training_samples//n_batch,
        epochs=100,
        callbacks=[history],
    )
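
For comparison, the Keras Callback API also exposes on_epoch_end. Since on_epoch_begin fires before an epoch trains, the "Training MAE" printed above epoch N actually reflects the weights left behind by epoch N-1. A minimal sketch (a hypothetical variant, not part of the original post) that runs the identical check once each epoch has finished:

class LossHistoryAtEnd(Callback):
    def on_epoch_end(self, epoch, logs={}):
        # hypothetical variant: same helper, evaluated after the epoch's updates
        mae_score = mae(X_train, y_train, self.model)
        print "End-of-epoch MAE: " + mae_score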
I think your implementation of the mean absolute error is exactly the same as Keras'. But you seem to be computing it on X_train, y_train, whereas Keras computes it on the rescaled and shifted version of the data (see train_generator), so that is the only difference I can see. Could you try feeding the loss function directly to model.compile with a loss(y_true, y_pred) signature, and check whether it gives the same results as the Keras mae metric?
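
A minimal sketch of that suggestion (the mae_loss name is assumed; K.mean and K.abs are standard Keras backend ops, and this is the same definition Keras uses for its built-in "mae"):

from keras import backend as K

def mae_loss(y_true, y_pred):
    # identical to Keras' built-in mean absolute error
    return K.mean(K.abs(y_true - y_pred), axis=-1)

my_model.compile(loss=mae_loss,
                 optimizer=optimizers.SGD(lr=0.00001, momentum=0.9),
                 metrics=[mae_loss])

If this still disagrees with the callback's score, the remaining gap would have to come from the data (the generator's rescaled, augmented batches versus the fixed X_train, y_train arrays), not from the metric definition.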