Python: Why are the predicted masks from an image segmentation network flipped?


To predict the license plate location in car images, I trained a U-Net style network. Below is the code that loads an image, runs a prediction, and displays the result:

path = "/home/mimus/apifave/images/saved_masks/plate_loc/images/original_2_14.jpg"
img = load_img(path, target_size=(640,368), color_mode="rgb")
img = tf.keras.preprocessing.image.img_to_array(img)

val_preds = model.predict(img[tf.newaxis, ...])


def display_mask(i):
    """Quick utility to display a model's prediction."""
    mask = np.argmax(val_preds[i], axis=-1)
    mask = np.expand_dims(mask, axis=-1)
    img = PIL.ImageOps.autocontrast(keras.preprocessing.image.array_to_img(mask))
    img  = ImageOps.invert(img)
    img = ImageOps.mirror(img)
    display(img)
    img = img.save("mask.jpg") 
display(Image(filename=path))
display_mask(0)    
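
As a side check, I sometimes compare the raw argmax mask (before the invert/mirror post-processing) against the input with a simple overlay; this is just a sketch, assuming matplotlib is installed and val_preds comes from the predict call above:

import matplotlib.pyplot as plt

# Raw class-index mask, with no invert/mirror applied
raw_mask = np.argmax(val_preds[0], axis=-1)

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].imshow(img.astype("uint8"))
axes[0].set_title("input image")
axes[1].imshow(img.astype("uint8"))
axes[1].imshow(raw_mask, alpha=0.4, cmap="jet")  # semi-transparent overlay
axes[1].set_title("raw predicted mask overlay")
for ax in axes:
    ax.axis("off")
plt.show()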
I trained the model to roughly 0.9935 accuracy on a dataset of 7,500 images and masks, which I generated with scikit-image utilities such as regionprops (a short sketch follows below). Now, this is the image to be predicted:

And this is the mask:

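For context, here is a minimal sketch of how masks like this can be generated with scikit-image; the binary input and the bounding-box approach are illustrative assumptions, not my exact pipeline:

import numpy as np
from skimage.measure import label, regionprops

def make_plate_mask(binary_plate, shape):
    """Build a binary mask from the bounding box of the largest region.

    binary_plate: 2-D boolean array marking candidate plate pixels (assumed).
    shape: (height, width) of the output mask.
    """
    mask = np.zeros(shape, dtype=np.uint8)
    labeled = label(binary_plate)
    regions = regionprops(labeled)
    if regions:
        # Keep the largest connected component as the plate
        plate = max(regions, key=lambda r: r.area)
        minr, minc, maxr, maxc = plate.bbox
        mask[minr:maxr, minc:maxc] = 1
    return mask
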
Here is my model:

from tensorflow import keras
from tensorflow.keras import layers


def get_model(img_size, num_classes):
    inputs = keras.Input(shape=img_size + (3,))

    ### [First half of the network: downsampling inputs] ###

    # Entry block
    x = layers.Conv2D(64, 3, strides=2, padding="same")(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)

    previous_block_activation = x  # Set aside residual

    # Blocks 1, 2, 3 are identical apart from the feature depth.
    for filters in [64, 128, 256]:
        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)

        x = layers.Activation("relu")(x)
        x = layers.SeparableConv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)

        x = layers.MaxPooling2D(3, strides=2, padding="same")(x)

        # Project residual
        residual = layers.Conv2D(filters, 1, strides=2, padding="same")(
            previous_block_activation
        )
        x = layers.add([x, residual])  # Add back residual
        previous_block_activation = x  # Set aside next residual

    ### [Second half of the network: upsampling inputs] ###

    for filters in [256, 128, 64, 32]:
        x = layers.Activation("relu")(x)
        x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)

        x = layers.Activation("relu")(x)
        x = layers.Conv2DTranspose(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)

        x = layers.UpSampling2D(2)(x)

        # Project residual
        residual = layers.UpSampling2D(2)(previous_block_activation)
        residual = layers.Conv2D(filters, 1, padding="same")(residual)
        x = layers.add([x, residual])  # Add back residual
        previous_block_activation = x  # Set aside next residual

    # Add a per-pixel classification layer
    outputs = layers.Conv2D(num_classes, 3, activation="softmax", padding="same")(x)

    # Define the model
    model = keras.Model(inputs, outputs)
    return model


# Free up RAM in case the model definition cells were run multiple times
keras.backend.clear_session()

# Build model
model = get_model(img_size, num_classes)
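
For completeness: img_size and num_classes are defined earlier in my notebook and are not shown above. The values and the compile step below are assumptions for illustration, consistent with the load_img target size and a plate/background split:

# Assumed values (not shown in the original post):
# img_size = (640, 368)   # matches load_img(target_size=(640, 368)) above
# num_classes = 2         # background and plate

# Hypothetical compile step matching the per-pixel softmax output;
# integer-encoded masks pair with sparse categorical cross-entropy.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)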
Is this something to do with my model, or why does this happen?