
Python: Ensemble model with different inputs (Expected to see 2 arrays)

Tags: python, machine-learning, keras, deep-learning, ensemble-learning

I have trained two models.

The first model is a UNet:

print(model_unet.summary())

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_4 (InputLayer)            (None, 128, 128, 1)  0                                            
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 128, 128, 32) 320         input_4[0][0]                    
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, 128, 128, 32) 9248        conv2d_26[0][0]  
.....
.....
conv2d_44 (Conv2D)              (None, 128, 128, 1)  33          zero_padding2d_4[0][0]           
==================================================================================================
Total params: 7,846,081
Trainable params: 7,846,081
Non-trainable params: 0
The second one is a ResNet:

print(model_resnet.summary())

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_3 (InputLayer)            (None, 128, 128, 3)  0                                            
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D)       (None, 134, 134, 3)  0           input_3[0][0]                    
....
....
conv2d_25 (Conv2D)              (None, 128, 128, 3)  99          zero_padding2d_3[0][0]           
==================================================================================================
Total params: 24,186,915
Trainable params: 24,133,795
Non-trainable params: 53,120
The UNet takes 1 channel (grayscale) and the ResNet takes 3 channels.
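A quick way to confirm this, assuming the model_unet and model_resnet objects from the summaries above, is to read the expected input shapes directly off the trained models:

# Sketch: confirm the channel mismatch between the two sub-models
print(model_unet.input_shape)    # (None, 128, 128, 1) -> grayscale
print(model_resnet.input_shape)  # (None, 128, 128, 3) -> RGB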

Then I tried to create an ensemble model:

from keras.layers import Input, Average
from keras.models import Model

def ensemble(models, models_input):
    # run each trained model on its own input tensor
    outputs = [model(models_input[idx]) for idx, model in enumerate(models)]
    # average the per-model predictions
    x = Average()(outputs)

    model = Model(models_input, x)

    return model

models = [model_unet, model_resnet]
models_input = [Input((128,128,1)), Input((128,128, 3))]

ensemble_model = ensemble(models, models_input)
When I try to predict on the validation data:

pred_val = ensemble_model.predict(X_val)
I get this error message:

Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.46755977],
         [0.52268691],
         [0.52766109],
         ....

X_val.shape is: (800, 128, 128, 1)

I think the problem is with the channels, but I don't know how to overcome it.
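One way to see why the error happens, assuming the ensemble_model built above: the ensemble was created with two separate Input layers, so predict expects a list with one array per input.

# Sketch: inspect what the two-input ensemble expects
print(len(ensemble_model.inputs))  # 2 -> predict() needs a list of 2 arrays
print(ensemble_model.input_shape)  # [(None, 128, 128, 1), (None, 128, 128, 3)]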

If your training data consists of grayscale images, and considering that your ResNet model takes RGB images as input, you should ask yourself how you want to go from grayscale to RGB. One answer is to repeat the grayscale image 3 times to get an RGB image. Then you can easily define a model with a single input layer that takes the grayscale image and feeds it to the models you have defined:

from keras import backend as K
from keras.layers import Input, Lambda, Average
from keras.models import Model

# single grayscale input shared by both sub-models
input_image = Input(shape=(128, 128, 1))

# the UNet consumes the grayscale image directly
unet_out = model_unet(input_image)
# repeat the single channel 3 times so the ResNet sees an RGB-shaped input
rgb_image = Lambda(lambda x: K.repeat_elements(x, 3, -1))(input_image)
resnet_out = model_resnet(rgb_image)

# average the two predictions
output = Average()([unet_out, resnet_out])

ensemble_model = Model(input_image, output)
Then you can easily call predict with just one input array:
pred_val = ensemble_model.predict(X_val)
An alternative to this solution is the approach you used in your question. However, you would first need to convert the images from grayscale to RGB yourself and then pass both arrays to the predict method:

import numpy as np

# make an RGB copy of the grayscale data for the ResNet input
X_val_rgb = np.repeat(X_val, 3, -1)

pred_val = ensemble_model.predict([X_val, X_val_rgb])

Can you try using pred_val = ensemble_model.predict(X_val[0])?

@Bazingaa: it gives exactly the same error.

@George you defined two input layers for the model, but you are only giving it one input array. How is it supposed to feed the other one, the one related to the ResNet?

@today: hmm... do you mean that I have to use this: ensemble_model.predict([X_val, X_val])? So, give an array for each model? I didn't know that. But it works! You just have to change X_val to have 3 channels.

@today: if you want to post an answer, please do! Thanks.

OK, thanks! But if I try to use np.repeat it gives me a memory error (18000 is the size of the test data). I tried using tf.image.grayscale_to_rgb(X_test, name=None) but it is still the same. I also looked at that, but still the same. Do you think K.repeat_elements is more memory-efficient?

@George K.repeat_elements is used inside the model. If you want to use your own solution, you need to prepare the data before feeding it to the model. Either increase the memory size, or convert the images in batches and then predict on them. You could also define a generator and use that instead. However, I would go with the solution I suggested, since I think having a single input layer is more logical.

OK, thank you very much! I had a predict generator in mind, I will try it. (upvoted)
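As a follow-up to the memory discussion above, one way to use the two-input version of the model without materialising a full RGB copy of all 18000 test images is to convert and predict batch by batch. This is only a sketch, assuming X_test has shape (18000, 128, 128, 1) and ensemble_model is the two-input model from the question; predict_in_batches is a hypothetical helper name:

import numpy as np

def predict_in_batches(model, X, batch_size=256):
    # Convert each grayscale batch to RGB just before predicting, so the
    # full 3-channel copy of X never has to sit in memory at once.
    preds = []
    for start in range(0, len(X), batch_size):
        batch = X[start:start + batch_size]
        batch_rgb = np.repeat(batch, 3, axis=-1)  # grayscale -> RGB for this batch only
        preds.append(model.predict([batch, batch_rgb]))
    return np.concatenate(preds, axis=0)

# pred_test = predict_in_batches(ensemble_model, X_test)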