Keras backend Tensorflow

I have to design a keras model that takes an RGB image as input and produces an RGB output. I have to build three parallel branches for R, G and B, as shown in the figure. My question now is how to split the RGB image into R, G and B and feed them as inputs to the three parallel branches of the CNN. Can anyone help me?
from __future__ import print_function
import keras
from keras.utils import plot_model
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Dense, Activation
from keras.layers import (Conv2D, MaxPooling2D, Input, concatenate,
                          ZeroPadding2D, merge, add)
import tensorflow as tf
from keras.models import load_model
from keras import optimizers
from keras import losses
from keras.optimizers import SGD, Adam
from keras.callbacks import ModelCheckpoint
visible = Input(shape=(64,64,3))
R = visible[:][:][:][0]
G = visible[:][:][:][1]
B = visible[:][:][:][2]
#red, green, blue = tf.split(3, 3, visible)
# first feature extractor
#conv1_1 = Conv2D(32, kernel_size=3, padding='same',
#kernel_initializer='he_normal')(visible)
conv1_1 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(R)
conv1_1 = Activation('relu')(conv1_1)
conv2_1 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(conv1_1)
conv2_1 = Activation('relu')(conv2_1)
conv3_1 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(conv2_1)
conv3_1 = Activation('relu')(conv3_1)
#conv1_2 = Conv2D(32, kernel_size=3, padding='same',
#kernel_initializer='he_normal')(visible)
conv1_2 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(G)
conv1_2 = Activation('relu')(conv1_2)
conv2_2 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(conv1_2)
conv2_2 = Activation('relu')(conv2_2)
conv3_2 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(conv2_2)
conv3_2 = Activation('relu')(conv3_2)
#conv1_3 = Conv2D(32, kernel_size=3, padding='same',
#kernel_initializer='he_normal')(visible)
conv1_3 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(B)
conv1_3 = Activation('relu')(conv1_3)
conv2_3 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(conv1_3)
conv2_3 = Activation('relu')(conv2_3)
conv3_3 = Conv2D(32, kernel_size=3, padding='same',
kernel_initializer='he_normal')(conv2_3)
conv3_3 = Activation('relu')(conv3_3)
merge = concatenate([conv3_1, conv3_2, conv3_3])
model = Model(inputs=visible, outputs=merge)
# summarize layers
print(model.summary())
# plot graph
plot_model(model, to_file='shared_input_layer.png')
I want to split "visible" into R, G and B and feed them as inputs to conv1_1, conv1_2 and conv1_3. I want to add a layer that splits the RGB automatically and feeds it as input.

For a multi-input, multi-output model, use the functional API. To merge the results of the conv layers, you can use the keras Concatenate layer.
When you feed an RGB image into the model, you are actually feeding a tensor of size (height, width, 3), where the 3 stands for the 3 channels (red, green, blue).
You can separate the channels like this:
b, g, r = image_array[:, :, 0], image_array[:, :, 1], image_array[:, :, 2]
Just make sure the channels are aligned correctly (and be careful to remove the alpha channel if one is present).
You can also use OpenCV, which makes working with images easier:
import cv2
b, g, r = cv2.split(image_array)
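The same split can be done with plain NumPy indexing, without OpenCV. A minimal sketch on a dummy array (the 64x64 size is illustrative):

```python
import numpy as np

# A dummy 64x64 image with 3 channels, standing in for a real image array.
image_array = np.random.rand(64, 64, 3)

# Slice each channel out of the last axis. OpenCV stores images as BGR,
# hence the b, g, r ordering when the array comes from cv2.imread.
b, g, r = image_array[:, :, 0], image_array[:, :, 1], image_array[:, :, 2]

# Each channel is a plain 2-D map of shape (64, 64).
```

Stacking the three channels back along the last axis reconstructs the original array, which is exactly what the Concatenate step at the end of the model does.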
If the three networks are different:
visible = Input((64,64,3))
RGB = Lambda(lambda x: tf.split(x, 3, axis=-1))(visible)
net1 = Conv2D(....)(RGB[0])
net1 = Activation(....)(net1)
net1 = Conv2D(....)(net1)
net1 = Activation(....)(net1)
net2 = Conv2D(....)(RGB[1])
....
net3 = Conv2D(....)(RGB[2])
.....
joined = Concatenate()([net1,net2,net3])
model = Model(visible, joined)
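Filled in with concrete layers, the "different networks" variant can be sketched as below, assuming TensorFlow 2's tf.keras; the 32-filter conv layers and two-layer branch depth are illustrative, not prescribed by the answer:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Activation, Lambda, Concatenate
from tensorflow.keras.models import Model

visible = Input((64, 64, 3))

# Split the 3-channel input into a list of three (64, 64, 1) tensors.
RGB = Lambda(lambda x: tf.split(x, 3, axis=-1))(visible)

# Build one independent branch per channel; each loop iteration creates
# fresh Conv2D layers, so the three branches do NOT share weights.
branches = []
for channel in RGB:
    net = Conv2D(32, 3, padding='same')(channel)
    net = Activation('relu')(net)
    net = Conv2D(32, 3, padding='same')(net)
    net = Activation('relu')(net)
    branches.append(net)

# Merge the branch outputs along the channel axis: (64, 64, 32*3).
joined = Concatenate()(branches)
model = Model(visible, joined)
```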
If the three networks are identical:
visible = Input((64,64,3))
out = Lambda(lambda x: K.permute_dimensions(x,(0,3,1,2)))(visible)
out = Reshape((3,64,64,1))(out)
out = TimeDistributed(Conv2D(...))(out)
out = TimeDistributed(Activation(...))(out)
out = TimeDistributed(Conv2D(...))(out)
....
out = Reshape((3,64,64))(out)
out = Lambda(lambda x: K.permute_dimensions(x, (0,2,3,1)))(out)
model = Model(visible,out)
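The permute/reshape trick above moves the channel axis into a leading "time" axis so that TimeDistributed applies one shared network to each channel, then undoes the rearrangement. The tensor bookkeeping can be checked with plain NumPy (a sketch on a dummy batch; no Keras layers are run in between):

```python
import numpy as np

batch = np.random.rand(2, 64, 64, 3)  # a dummy batch of RGB images

# Move channels to the front, as K.permute_dimensions(x, (0,3,1,2)) does.
out = np.transpose(batch, (0, 3, 1, 2))   # (2, 3, 64, 64)
out = out.reshape(2, 3, 64, 64, 1)        # one "timestep" per channel

# ... the TimeDistributed layers would process each (64, 64, 1) slice here ...

# Undo the rearrangement, mirroring the trailing Reshape and permute.
out = out.reshape(2, 3, 64, 64)
out = np.transpose(out, (0, 2, 3, 1))     # back to (2, 64, 64, 3)
```

Since no layer modifies the values here, the round trip returns the original batch exactly, confirming the two permute/reshape pairs are inverses.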
Thanks, but R, G and B are still 3-D: visible = Input(shape=(64,64,3)); R = visible[:][:][:][0]; R.shape gives TensorShape([Dimension(64), Dimension(64), Dimension(3)]).
Yes, it solved my problem. Is it possible to combine the outputs of the three networks in a similar way to how a Lambda layer splits them? I used the concatenate function to combine them.
It is. But Concatenate is easier to use. A
Lambda
layer is just a custom layer that can run any function you want.
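What Concatenate does to the three branch outputs can be sketched with NumPy: it simply joins the tensors along the channel axis (single-channel maps are used here for illustration):

```python
import numpy as np

# Three single-channel feature maps, like the outputs of the three branches.
net1 = np.random.rand(64, 64, 1)
net2 = np.random.rand(64, 64, 1)
net3 = np.random.rand(64, 64, 1)

# Join along the channel axis -- what keras Concatenate() does by default.
joined = np.concatenate([net1, net2, net3], axis=-1)  # (64, 64, 3)
```

A Lambda layer wrapping the equivalent backend concat would produce the same result, which is why Concatenate is the simpler choice here.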