Python: How to code a residual block using a basic two-layer CNN algorithm with "tensorflow.keras"?



I built a basic CNN model with the tensorflow.keras library:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

model = Sequential()

# First Layer
model.add(Conv2D(64, (3,3), input_shape = (IMG_SIZE,IMG_SIZE,1)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (3,3)))

# Second Layer
model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (3,3)))

# Third Layer
model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (3,3)))

# Fourth Layer
model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (3,3)))

# Fifth Layer
model.add(Conv2D(64, (3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size = (3,3)))

model.add(Flatten())

# Sixth Layer
model.add(Dense(64))
model.add(Activation("relu"))

# Seventh Layer
model.add(Dense(1))
model.add(Activation('sigmoid'))
Now I want to add a connection between the second and the fourth layer to implement a residual block, using the tensorflow.keras library.

So how should I modify the code to implement such a residual block?

The residual block in the architecture looks like this:

You need to use the functional API, because the Sequential model is too limited for this. A Keras implementation of a residual block looks like this:

from tensorflow.keras import layers

def resblock(x, kernelsize, filters):
    # Residual path: two convolutions with a BatchNormalization in between
    fx = layers.Conv2D(filters, kernelsize, activation='relu', padding='same')(x)
    fx = layers.BatchNormalization()(fx)
    fx = layers.Conv2D(filters, kernelsize, padding='same')(fx)
    # Skip connection: add the block's input back onto the residual path
    out = layers.Add()([x, fx])
    out = layers.ReLU()(out)
    out = layers.BatchNormalization()(out)
    return out
The BatchNormalization() layers are not strictly required, but they can be a solid option for improving accuracy. Note also that x must have the same number of filters as the filters argument, since the Add() layer requires tensors of matching shape.
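To show how this plugs into the asker's architecture, here is a minimal sketch of the full model rewritten with the functional API, with the residual block replacing the second-to-fourth-layer span. The 64-filter convolutions and (3,3) pooling come from the question's code; IMG_SIZE = 96 is an assumed value chosen so the five downsampling steps don't shrink the feature map to nothing — substitute your own image size.

```python
from tensorflow.keras import layers, Model

IMG_SIZE = 96  # assumed input size, not from the original question

def resblock(x, kernelsize, filters):
    # Same residual block as defined in the answer above
    fx = layers.Conv2D(filters, kernelsize, activation='relu', padding='same')(x)
    fx = layers.BatchNormalization()(fx)
    fx = layers.Conv2D(filters, kernelsize, padding='same')(fx)
    out = layers.Add()([x, fx])
    out = layers.ReLU()(out)
    out = layers.BatchNormalization()(out)
    return out

inputs = layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1))

# First layer, as in the original Sequential model
x = layers.Conv2D(64, (3, 3), activation='relu')(inputs)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)

# Residual block in place of the second-to-fourth layers;
# 64 filters matches the channel count of x, as Add() requires
x = resblock(x, (3, 3), 64)

# Fifth layer
x = layers.Conv2D(64, (3, 3), activation='relu')(x)
x = layers.MaxPooling2D(pool_size=(3, 3))(x)

x = layers.Flatten()(x)
x = layers.Dense(64, activation='relu')(x)
outputs = layers.Dense(1, activation='sigmoid')(x)

model = Model(inputs, outputs)
```

Calling model.summary() will show the Add layer merging the skip connection with the convolutional path, which is exactly what the Sequential API cannot express.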