How to generate a classification report and confusion matrix from predict_generator output with a segmented image dataset? (Keras / TensorFlow)


I have created a classifier with ImageDataGenerator, using flow_from_directory to load my dataset, and then I train the model and run predictions.

My question is: how do I get metrics (e.g. accuracy, recall, FPR, etc.) from the output of the classifier's predict_generator?

If I'm not mistaken, using a confusion matrix and a classification report would help a lot here. The images (.tif files) can be found in:

/data/test/image ---> RGB images 
/data/test/label ---> Binary mask images
/data/train/image ---> RGB images 
/data/train/label ---> Binary mask images
The images look like this: [image]. The predict_generator method returns images like this: [image]

I have already tried the following code to generate the confusion matrix, but it does not work:

predicted_classes_indices = np.argmax(results,axis=1)
labels = (image_generator.class_indices)
labels = dict((v, k) for k, v in labels.items())
predictions = [labels[k] for k in predicted_classes_indices]
cm = confusion_matrix(labels, predicted_classes_indices)
The full code:

from redeUnet import get_unet
import matplotlib.pyplot as plt
import numpy as np
import os
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.models import model_from_json
from tensorflow.python.keras.callbacks import EarlyStopping
from tensorflow.python.keras.callbacks import ModelCheckpoint
# Missing from the original listing, but needed by the compile step
# and the confusion-matrix attempt further down:
from tensorflow.python.keras.optimizers import Adam
from sklearn.metrics import confusion_matrix

PATH_TRAIN = "..\\data\\train\\"

btSize = 4
alt = 256 # image row
larg = 256 # image col
image_folder = 'image'
mask_folder = 'label'
image_color_mode = 'rgb'
mask_color_mode = 'grayscale'
clMode =  'None' #'binary'
epocas = 5 
qtdPatience = 60

'''
Data augmentation
'''
data_gen_args = dict(featurewise_center=False,
                     samplewise_center=False,  # set each sample mean to 0
                     featurewise_std_normalization=False,  # divide inputs by std of the dataset
                     samplewise_std_normalization=False,  # divide each input by its std
                     zca_whitening=False,  # apply ZCA whitening
                     rotation_range=40,  # randomly rotate images in the range (degrees, 0 to 180)
                     zoom_range = 0.2, # Randomly zoom image 
                     width_shift_range=0.2,  # randomly shift images horizontally (fraction of total width)
                     height_shift_range=0.2,  # randomly shift images vertically (fraction of total height)
                     horizontal_flip=True,  # randomly flip images
                     vertical_flip=False,  # do not flip vertically
                     rescale=1./255,
                     validation_split = 0.2)  # reserve 20% of the data for validation
train_image_datagen = ImageDataGenerator(**data_gen_args)
train_mask_datagen = ImageDataGenerator(**data_gen_args)

'''
DATASET prepare and load (20% Validation)
'''
# Load RGB images TRAINING
image_generator = train_image_datagen.flow_from_directory(PATH_TRAIN, 
                                                          classes = [image_folder],
                                                          class_mode = None,
                                                          color_mode = image_color_mode,
                                                          target_size = (larg, alt),
                                                          batch_size = btSize,
                                                          save_to_dir = None,
                                                          shuffle = False,
                                                          subset = 'training',
                                                          seed = 1)
# Load BINARY (Mask) images TRAINING
mask_generator = train_mask_datagen.flow_from_directory(PATH_TRAIN, 
                                                          classes = [mask_folder],
                                                          class_mode = None,
                                                          color_mode = mask_color_mode,
                                                          target_size = (larg, alt),
                                                          batch_size = btSize,
                                                          save_to_dir = None,
                                                          shuffle = False,
                                                          subset = 'training',
                                                          seed = 1)

train_generator = zip(image_generator, mask_generator)
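(As a side note, `zip` over two Keras iterators works here, but an explicit pairing generator makes the (image, mask) coupling easier to read and debug. A minimal sketch, not part of the original post:)

```python
def pair_generator(img_gen, msk_gen):
    """Yield (image_batch, mask_batch) tuples indefinitely.

    Assumes both generators were built with the same seed and
    shuffle=False, so batch i of one matches batch i of the other.
    """
    while True:
        yield next(img_gen), next(msk_gen)
```

With the names above it would be used as `train_generator = pair_generator(image_generator, mask_generator)`.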

#-------------------------------------------------
# VALIDATION images RGB
valid_image_generator = train_image_datagen.flow_from_directory(PATH_TRAIN, 
                                                          classes = [image_folder],
                                                          class_mode = None,
                                                          color_mode = image_color_mode,
                                                          target_size = (larg, alt),
                                                          batch_size = btSize,
                                                          save_to_dir = None,
                                                          shuffle = False,
                                                          subset = 'validation',
                                                          seed = 1)

# VALIDATION images BINARY (Mask)
valid_mask_generator = train_mask_datagen.flow_from_directory(PATH_TRAIN, 
                                                          classes = [mask_folder],
                                                          class_mode = None,
                                                          color_mode = mask_color_mode,
                                                          target_size = (larg, alt),
                                                          batch_size = btSize,
                                                          save_to_dir = None,
                                                          shuffle = False,
                                                          subset = 'validation',
                                                          seed = 1)
valid_generator = zip(valid_image_generator, valid_mask_generator)
#-------------------------------------------------

'''
RUN TRAINING
'''
# Get UNET
classificador = get_unet(larg, alt, 3)
classificador.compile(optimizer = Adam(lr = 1e-4), loss = 'binary_crossentropy', metrics = ['accuracy']) #metrics = ['accuracy', minhaMetrica])

# Salvando o Modelo e Pesos (Best Model, StopEarly)
es = EarlyStopping(monitor = 'val_loss', mode = 'min', verbose = 1, patience = qtdPatience)
mc = ModelCheckpoint('best_polyp_unet_model.h5', monitor = 'val_loss', verbose = 1, save_best_only = True)


history = classificador.fit_generator(train_generator, 
                            steps_per_epoch = image_generator.n // btSize,
                            validation_data = valid_generator,
                            validation_steps = valid_image_generator.n // btSize,
                            epochs = epocas, callbacks=[es, mc])

resultados = classificador.predict_generator(valid_generator, 
                                           steps = valid_image_generator.n,
                                           verbose = 1)
#-------------------------------------------------

#HOW TO GET THE METRICS?

predicted_classes_indices = np.argmax(resultados,axis=1)
labels = (image_generator.class_indices)
labels = dict((v, k) for k, v in labels.items())
predictions = [labels[k] for k in predicted_classes_indices]
cm = confusion_matrix(ground_truth, predicted_classes)

#-------------------------------------------------

I get an error message on this line:

predictions = [labels[k] for k in predicted_classes_indices]

Error: unhashable type: 'numpy.ndarray'

When I check the output variable of the prediction ("resultados") by running:

resultados.shape

it shows:

(480, 256, 256, 1)

so the U-Net prediction generated 480 images.

But how do I convert this output into something that confusion_matrix or classification_report can consume? I think this is harder because it is a segmentation problem.
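To see where the unhashable-type error comes from, it helps to check what `np.argmax(..., axis=1)` actually returns on a 4-D segmentation output. A small sketch with fake data shaped like the real `(480, 256, 256, 1)` result:

```python
import numpy as np

# Fake predict_generator output: 2 soft masks of 4x4 pixels, 1 channel,
# standing in for the real (480, 256, 256, 1) array.
results = np.random.rand(2, 4, 4, 1)

# argmax over axis=1 does NOT give one class index per image here:
idx = np.argmax(results, axis=1)
print(idx.shape)  # (2, 4, 1) -> each "index" is itself an array,
                  # which is why labels[k] raises "unhashable type: 'numpy.ndarray'"

# With a single sigmoid channel, the per-pixel class comes from thresholding:
pred_classes = (results > 0.5).astype(np.uint8)
print(pred_classes.shape)  # (2, 4, 4, 1): one 0/1 label per pixel
```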


Any suggestion would be greatly appreciated.

You need to flatten your predictions and your ground truth (y).

Then you can run classification_report and confusion_matrix on the flattened arrays:
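The flattening step could look like the sketch below (not the answerer's original code, which is missing from the page): threshold the soft masks to 0/1 per-pixel labels and flatten everything into one long vector.

```python
import numpy as np

def flatten_for_metrics(pred_masks, true_masks, threshold=0.5):
    """Turn stacks of masks shaped (N, H, W, 1) into flat 0/1 pixel
    vectors that classification_report / confusion_matrix accept."""
    predictions_flat = (pred_masks > threshold).astype(np.uint8).ravel()
    y_flat = (true_masks > threshold).astype(np.uint8).ravel()
    return y_flat, predictions_flat
```

With the question's variables this would be `y_flat, predictions_flat = flatten_for_metrics(resultados, true_masks)`, where `true_masks` still has to be collected from `valid_mask_generator` (e.g. by concatenating `next(valid_mask_generator)` batches for the same number of steps used for prediction; `shuffle=False` keeps the order aligned).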

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix


print('Train report', classification_report(y_flat, predictions_flat))
print('Train conf matrix', confusion_matrix(y_flat, predictions_flat))
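For reference, on a tiny hand-made flattened example (invented numbers, just to show the output format) the two calls behave like this:

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Eight flattened pixel labels: 0 = background, 1 = foreground (mask).
y_flat = np.array([0, 0, 1, 1, 0, 1, 0, 1])
predictions_flat = np.array([0, 1, 1, 1, 0, 0, 0, 1])

print(confusion_matrix(y_flat, predictions_flat))
# [[3 1]
#  [1 3]]
print(classification_report(y_flat, predictions_flat))
```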

Did you manage to solve this?