Image segmentation: how should I train my images for image segmentation?


I am attempting an image segmentation task in which I need to separate the original image from the spliced part, i.e. a given image has two classes: the authentic part of the image and the spliced part. For the dataset, I am using the CASIA v1.0 dataset with its ground truth as masks. I use a VGG-16 model as the backbone of an FCN-8 model. Here is the code for the model:

model = tf.keras.applications.VGG16(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

x = tf.keras.layers.Conv2D(4096, (7, 7), padding="SAME", activation="relu", name="fc5")(model.layers[-1].output)
x = tf.keras.layers.Conv2D(4096, (7, 7), padding="SAME", activation="relu", name="fc6")(x)
x = tf.keras.layers.Conv2D(2, (1, 1), padding="SAME", activation="relu", name="score_fr")(x)
Conv_size = x.shape[2]  # 7 for a 224x224 input (16 if the image size is 512)
x = tf.keras.layers.Conv2DTranspose(2, kernel_size=(4, 4), strides=(2, 2), padding="valid", activation=None, name="score2")(x)
Deconv_size = x.shape[2]
Extra = Deconv_size - 2 * Conv_size

x = tf.keras.layers.Cropping2D(cropping=((0, 2), (0, 2)))(x)
model1 = tf.keras.Model(inputs=model.input, outputs=[x])

skip_conv1 = tf.keras.layers.Conv2D(2, (1, 1), padding="SAME", activation=None, name="score_pool4")
summed = tf.keras.layers.Add()([skip_conv1(model1.layers[14].output), model1.layers[-1].output])

x = tf.keras.layers.Conv2DTranspose(2, kernel_size=(4, 4), strides=(2, 2), padding="valid", activation=None, name="score4")(summed)
x = tf.keras.layers.Cropping2D(cropping=((0, 2), (0, 2)))(x)

skip_con2 = tf.keras.layers.Conv2D(2, kernel_size=(1, 1), padding="same", activation=None, name="score_pool3")
Summed = tf.keras.layers.Add()([skip_con2(model.layers[10].output), x])

Up = tf.keras.layers.Conv2DTranspose(2, kernel_size=(16, 16), strides=(8, 8),
                                     padding="valid", activation=None, name="upsample")(Summed)
final = tf.keras.layers.Cropping2D(cropping=((0, 8), (0, 8)))(Up)

final_model = tf.keras.Model(inputs=model.input, outputs=final)
final_model.compile(optimizer='adam',
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                    metrics=['accuracy'])
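As a quick sanity check on the cropping amounts used above: with `padding="valid"`, the output size of `Conv2DTranspose` is `(in - 1) * stride + kernel`. A small plain-Python helper (my own sketch, not part of the model) confirms that each `Cropping2D` brings the feature map back to the matching pool size for a 224x224 input:

```python
def deconv_out(size, stride, kernel):
    """Output spatial size of Conv2DTranspose with padding='valid'."""
    return (size - 1) * stride + kernel

# For a 224x224 input: VGG16 block5 output is 7x7, pool4 is 14x14, pool3 is 28x28.
assert deconv_out(7, 2, 4) - 2 == 14      # score2, then crop ((0,2),(0,2)) -> pool4 size
assert deconv_out(14, 2, 4) - 2 == 28     # score4, then crop ((0,2),(0,2)) -> pool3 size
assert deconv_out(28, 8, 16) - 8 == 224   # upsample, then crop ((0,8),(0,8)) -> input size
```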
My images and masks are in separate folders: train and train_labels for training, and val and val_labels for validation. I am using ImageDataGenerator for image augmentation. Here is the code:

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True)
        
val_datagen = ImageDataGenerator(rescale=1./255)
train_image_generator = train_datagen.flow_from_directory("final/train/", target_size=(224, 224), batch_size=32)

train_mask_generator = train_datagen.flow_from_directory("final/train_label/", target_size=(224, 224), batch_size=32)

val_image_generator = val_datagen.flow_from_directory("final/val/", target_size=(224, 224), batch_size=32)

val_mask_generator = val_datagen.flow_from_directory("final/val_label/", target_size=(224, 224), batch_size=32)



train_generator = (pair for pair in zip(train_image_generator, train_mask_generator))
val_generator = (pair for pair in zip(val_image_generator, val_mask_generator))
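To illustrate the structure of what this zip actually yields (a plain-Python stand-in with hypothetical placeholder strings; `flow_from_directory` defaults to `class_mode='categorical'`, so each generator step yields an `(images, labels)` tuple):

```python
# Stand-in for a Keras flow_from_directory generator: by default it yields
# (image_batch, class_labels) tuples because class_mode='categorical'.
def fake_flow(name):
    while True:
        yield (f"{name}_batch", f"{name}_labels")  # hypothetical placeholders

paired = zip(fake_flow("image"), fake_flow("mask"))
batch = next(paired)
# batch == (("image_batch", "image_labels"), ("mask_batch", "mask_labels"))
# model.fit() reads batch[0] as the input and batch[1] as the target,
# so the target it sees is itself a 2-tuple rather than a single array.
print(len(batch), len(batch[1]))  # 2 2
```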
When I try to train my model with these generators, I get the error below:

model_history = final_model.fit(train_generator, epochs=50,
                                steps_per_epoch=23,
                                validation_data=val_generator,
                                validation_steps=2)
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 1 array(s), but instead got the following list of 2 arrays: [array([[[[0., 0., 0.],
         [0., 0., 0.],
         [0., 0., 0.],
         ...,
         [0., 0., 0.],
         [0., 0., 0.],
         [0., 0., 0.]],

        [[0., 0., 0.],
         [0., 0., 0.],...
Could you please help me out? I am new to image segmentation, so any suggestions are welcome.