
Python: Passing the output of a CNN to a bidirectional LSTM


I am working on a project in which I have to pass the output of a CNN to a bidirectional LSTM. I created the model shown below, but it throws an "incompatible" error. Please let me know where I went wrong and how to fix it.


    model = Sequential()
    model.add(Conv2D(filters = 16, kernel_size = 3,input_shape = (32,32,1)))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2,2),strides=1, padding='valid'))
    model.add(Activation('relu'))
    
    model.add(Conv2D(filters = 32, kernel_size=3))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Activation('relu'))
    
    model.add(Dropout(0.25))
    model.add(Conv2D(filters = 48, kernel_size=3))
    model.add(BatchNormalization())
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Activation('relu'))
    
    model.add(Dropout(0.25))
    model.add(Conv2D(filters = 64, kernel_size=3))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    
    model.add(Dropout(0.25))
    model.add(Conv2D(filters = 80, kernel_size=3))
    model.add(BatchNormalization())
    model.add(Activation('relu'))
    
    model.add(Bidirectional(LSTM(150, return_sequences=True)))
    model.add(Dropout(0.3))
    model.add(Bidirectional(LSTM(96)))
    model.add(Dense(total_words/2, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
    model.add(Dense(total_words, activation='softmax'))
    
    model.summary()

The error returned is:


    ValueError                                Traceback (most recent call last)
    <ipython-input-24-261befed7006> in <module>()
         27 model.add(Activation('relu'))
         28 
    ---> 29 model.add(Bidirectional(LSTM(150, return_sequences=True)))
         30 model.add(Dropout(0.3))
         31 model.add(Bidirectional(LSTM(96)))
    
    5 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
        178                          'expected ndim=' + str(spec.ndim) + ', found ndim=' +
        179                          str(ndim) + '. Full shape received: ' +
    --> 180                          str(x.shape.as_list()))
        181     if spec.max_ndim is not None:
        182       ndim = x.shape.ndims
    
    ValueError: Input 0 of layer bidirectional is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, 1, 1, 80]



Conv2D has 2D input/output, but an LSTM takes 1D input per timestep. That is why the LSTM expects 3 dimensions (batch, sequence, hidden) but found 4 (batch, X, Y, hidden). One solution is, for example, to use a Flatten layer after the CNN and before the LSTM to project the output onto a one-dimensional sequence.
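As a minimal sketch (with a made-up toy input shape, not the question's exact model): Flatten collapses the 4D Conv2D output to 2D (batch, features), which is still not the 3D (batch, timesteps, features) input an LSTM expects, so a Reshape is typically needed afterwards to restore a sequence axis:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Reshape, Bidirectional, LSTM

# Toy shapes chosen for illustration only.
model = Sequential()
model.add(Conv2D(8, 3, input_shape=(8, 8, 1)))  # -> (None, 6, 6, 8)
model.add(Flatten())                            # -> (None, 288): only 2D
model.add(Reshape((6, 48)))                     # -> (None, 6, 48): 3D again
model.add(Bidirectional(LSTM(16)))              # -> (None, 32)
model.summary()
```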

The problem is the data being passed to the LSTM, and it can be solved inside the network. An LSTM needs 3D data, while Conv2D produces 4D data. You have two possibilities:

1) apply a Reshape to (batch_size, H, W*channels)

2) apply a Reshape to (batch_size, W, H*channels)

Either way you obtain 3D data that can be used by the LSTM. Here is an example:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Lambda, LSTM, Dense, Reshape, Permute

def ReshapeLayer(x):

    shape = x.shape

    # possibility 1: (H, W*channels)
    reshape = Reshape((shape[1], shape[2] * shape[3]))(x)

    # possibility 2: (W, H*channels)
    # transpose = Permute((2, 1, 3))(x)
    # reshape = Reshape((shape[2], shape[1] * shape[3]))(transpose)

    return reshape

model = Sequential()
model.add(Conv2D(filters=16, kernel_size=3, input_shape=(32, 32, 3)))
model.add(Lambda(ReshapeLayer))  # <============ reshape 4D -> 3D here
model.add(LSTM(16))
model.add(Dense(units=2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
Thank you for your attention. Adding Flatten makes the output two-dimensional, so I still run into the incompatibility. The error message is: "Input 0 of layer bidirectional is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 80]". Any ideas?
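For completeness, here is a hedged sketch applying the Reshape idea from the answer above to the model in the question (`total_words` is a placeholder value here, since its real value is not given). The CNN stack ends at (None, 1, 1, 80), so reshaping to (1, 80) yields the 3D input Bidirectional(LSTM) expects, albeit with a sequence length of 1; removing some pooling layers would leave a longer sequence for the LSTMs to work over:

```python
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, MaxPooling2D,
                                     Activation, Dropout, Reshape, Bidirectional,
                                     LSTM, Dense)

total_words = 1000  # placeholder: the real vocabulary size is not in the question

model = Sequential()
model.add(Conv2D(filters=16, kernel_size=3, input_shape=(32, 32, 1)))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=1, padding='valid'))
model.add(Activation('relu'))

model.add(Conv2D(filters=32, kernel_size=3))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation('relu'))

model.add(Dropout(0.25))
model.add(Conv2D(filters=48, kernel_size=3))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Activation('relu'))

model.add(Dropout(0.25))
model.add(Conv2D(filters=64, kernel_size=3))
model.add(BatchNormalization())
model.add(Activation('relu'))

model.add(Dropout(0.25))
model.add(Conv2D(filters=80, kernel_size=3))
model.add(BatchNormalization())
model.add(Activation('relu'))

# The stack above ends at (None, 1, 1, 80); collapse the two spatial axes
# into (timesteps, features) = (1, 80) so the LSTMs receive 3D input.
model.add(Reshape((1, 80)))

model.add(Bidirectional(LSTM(150, return_sequences=True)))
model.add(Dropout(0.3))
model.add(Bidirectional(LSTM(96)))
# integer division so the unit count is an int (total_words / 2 is a float)
model.add(Dense(total_words // 2, activation='relu',
                kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense(total_words, activation='softmax'))
model.summary()
```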