Integration of CNN and RNN models in Keras for NLP

Trying to implement a model from a paper in Keras.

The model looks as follows (taken from the paper):

My code is:

from keras.layers import (Input, Embedding, Conv1D, MaxPooling1D, Concatenate,
                          Flatten, Dense, Reshape, LSTM)
from keras.models import Model

document_input = Input(shape=(None,), dtype='int32')
embedding_layer = Embedding(vocab_size, WORD_EMB_SIZE, weights=[initial_embeddings], 
                                input_length=DOC_SEQ_LEN, trainable=True)
convs = []
filter_sizes = [2,3,4,5]

doc_embedding = embedding_layer(document_input)
for filter_size in filter_sizes:
    l_conv = Conv1D(filters=256, kernel_size=filter_size, padding='same', activation='relu')(doc_embedding)
    l_pool = MaxPooling1D(filter_size)(l_conv)
    convs.append(l_pool)

l_merge = Concatenate(axis=1)(convs)
l_flat = Flatten()(l_merge)
l_dense = Dense(100, activation='relu')(l_flat)
l_dense_3d = Reshape((1,int(l_dense.shape[1])))(l_dense)

gene_variation_input = Input(shape=(None,), dtype='int32')
gene_variation_embedding = embedding_layer(gene_variation_input)
rnn_layer = LSTM(100, return_sequences=False, stateful=True)(gene_variation_embedding,initial_state=[l_dense_3d])

l_flat = Flatten()(rnn_layer)
output_layer = Dense(9, activation='softmax')(l_flat)
model = Model(inputs=[document_input,gene_variation_input], outputs=[output_layer])
I am not sure whether I am wiring in the text feature vector from the right-hand side of the figure above correctly! When I try it, the error I get is:

ValueError: Layer lstm_9 expects 3 inputs, but it received 2 input tensors. Input received: [<tf.Tensor 'embedding_10_1/Gather:0' shape=(?, ?, 200) dtype=float32>, <tf.Tensor 'reshape_9/Reshape:0' shape=(?, 1, 100) dtype=float32>]
Model summary:

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_8 (InputLayer)             (32, 9)               0                                            
____________________________________________________________________________________________________
input_7 (InputLayer)             (32, 4000)            0                                            
____________________________________________________________________________________________________
embedding_6 (Embedding)          multiple              73764400    input_7[0][0]                    
                                                                   input_8[0][0]                    
____________________________________________________________________________________________________
conv1d_13 (Conv1D)               (32, 4000, 256)       102656      embedding_6[0][0]                
____________________________________________________________________________________________________
conv1d_14 (Conv1D)               (32, 4000, 256)       153856      embedding_6[0][0]                
____________________________________________________________________________________________________
conv1d_15 (Conv1D)               (32, 4000, 256)       205056      embedding_6[0][0]                
____________________________________________________________________________________________________
conv1d_16 (Conv1D)               (32, 4000, 256)       256256      embedding_6[0][0]                
____________________________________________________________________________________________________
max_pooling1d_13 (MaxPooling1D)  (32, 2000, 256)       0           conv1d_13[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_14 (MaxPooling1D)  (32, 1333, 256)       0           conv1d_14[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_15 (MaxPooling1D)  (32, 1000, 256)       0           conv1d_15[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_16 (MaxPooling1D)  (32, 800, 256)        0           conv1d_16[0][0]                  
____________________________________________________________________________________________________
concatenate_4 (Concatenate)      (32, 5133, 256)       0           max_pooling1d_13[0][0]           
                                                                   max_pooling1d_14[0][0]           
                                                                   max_pooling1d_15[0][0]           
                                                                   max_pooling1d_16[0][0]           
____________________________________________________________________________________________________
flatten_4 (Flatten)              (32, 1314048)         0           concatenate_4[0][0]              
____________________________________________________________________________________________________
dense_6 (Dense)                  (32, 100)             131404900   flatten_4[0][0]                  
____________________________________________________________________________________________________
lstm_4 (LSTM)                    (32, 100)             120400      embedding_6[1][0]                
                                                                   dense_6[0][0]                    
                                                                   dense_6[0][0]                    
____________________________________________________________________________________________________
dense_7 (Dense)                  (32, 9)               909         lstm_4[0][0]                     
====================================================================================================
Total params: 206,008,433
Trainable params: 206,008,433
Non-trainable params: 0
____________________________________________________________________________________________________

An LSTM has 2 hidden states, but only 1 initial state is being provided. You can do one of the following:

Replace the LSTM with an RNN that has only 1 hidden state, such as a GRU:

rnn_layer = GRU(100, return_sequences=False, stateful=True)(gene_variation_embedding, initial_state=[l_dense_3d])
Or pass zeros as the initial state for the LSTM's second hidden state:

zeros = Lambda(lambda x: K.zeros_like(x), output_shape=lambda s: s)(l_dense_3d)
rnn_layer = LSTM(100, return_sequences=False, stateful=True)(gene_variation_embedding, initial_state=[l_dense_3d, zeros])
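
For reference, here is a minimal, self-contained sketch of the second option: seeding an LSTM's hidden state from a dense feature vector and using zeros for the cell state. The layer sizes and tensor names below are illustrative (not the question's), stateful is left at its default of False, and both state tensors are kept 2D with shape (batch, units), which is what initial_state expects:

from keras.layers import Input, Embedding, Dense, LSTM, Lambda
from keras.models import Model
from keras import backend as K

# Illustrative sizes, not taken from the question.
doc_features = Input(shape=(50,))                   # e.g. the output of the CNN branch
seq_input = Input(shape=(None,), dtype='int32')     # the second (gene/variation) sequence
seq_emb = Embedding(input_dim=1000, output_dim=64)(seq_input)

h0 = Dense(100, activation='relu')(doc_features)    # initial hidden state, shape (batch, 100)
c0 = Lambda(lambda x: K.zeros_like(x))(h0)          # zero initial cell state, same shape

rnn_out = LSTM(100)(seq_emb, initial_state=[h0, c0])  # initial_state is the list [h, c]
output = Dense(9, activation='softmax')(rnn_out)
model = Model(inputs=[doc_features, seq_input], outputs=output)
model.summary()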

Comments:

Shouldn't initial_states go inside the LSTM call? Based on some GitHub issues and the code, it has to be among the arguments passed in the call. I am also trying recurrentshop.

Yes, but you passed it to the Embedding, not to the LSTM.

You are right, that was a typo; I will fix it. I believe its initial states are h_0 and c_0. After reading through Keras, the meaning of stateful is clear, but I only want to set the h_0 and c_0 states while keeping stateful=False, and it looks like Keras supports that.

stateful is for when you want the network to remember its state across batches, which is not the same thing.

@farizrahman4u With hidden_states = K.variable(value=np.zeros((1,10))) and cell_states = K.variable(value=np.zeros((1,10))), lstm = LSTM(10)(input, initial_state=[hidden_states, cell_states]) gives me TypeError: 'list' object is not callable.