
Python: Keras CNN-RNN won't train. Probably needs some debugging

Tags: python, tensorflow, keras, conv-neural-network, rnn

I have a binary classification problem. I've been fairly successful training a model that passes my data through a pretrained embedding, then several CNNs in parallel, pools the results, and uses a dense layer to predict the class. However, when I layer an RNN after the CNNs, training fails completely. The code is below (it's a long post).

This is the CNN-only working model. My inputs are vectors of length 100:

from tensorflow.keras import layers as L  # import assumed; `weights` and `m` are defined elsewhere

inputs = L.Input(shape=(100,))  # shape must be a tuple
embedding = L.Embedding(input_dim=weights.shape[0],
                        output_dim=weights.shape[1],
                        input_length=100,
                        weights=[weights],
                        trainable=False)(inputs)
dropout = L.Dropout(0.5)(embedding)  # missing from the original snippet; the summary below shows a Dropout layer (rate assumed)
conv3 = L.Conv1D(m, kernel_size=3)(dropout)
conv4 = L.Conv1D(m, kernel_size=4)(dropout)
conv5 = L.Conv1D(m, kernel_size=5)(dropout)
maxpool3 = L.MaxPool1D(pool_size=(100-3+1,), strides=(1,))(conv3)
maxpool4 = L.MaxPool1D(pool_size=(100-4+1,), strides=(1,))(conv4)
maxpool5 = L.MaxPool1D(pool_size=(100-5+1,), strides=(1,))(conv5)
concatenated_tensor = L.Concatenate(axis=1)([maxpool3, maxpool4, maxpool5])
flattened = L.Flatten()(concatenated_tensor)
output = L.Dense(units=1, activation='sigmoid')(flattened)
Here is the summary:

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_25 (InputLayer)            (None, 100)           0                                            
____________________________________________________________________________________________________
embedding_25 (Embedding)         (None, 100, 50)       451300      input_25[0][0]                   
____________________________________________________________________________________________________
dropout_25 (Dropout)             (None, 100, 50)       0           embedding_25[0][0]               
____________________________________________________________________________________________________
conv1d_73 (Conv1D)               (None, 98, 100)       15100       dropout_25[0][0]                 
____________________________________________________________________________________________________
conv1d_74 (Conv1D)               (None, 97, 100)       20100       dropout_25[0][0]                 
____________________________________________________________________________________________________
conv1d_75 (Conv1D)               (None, 96, 100)       25100       dropout_25[0][0]                 
____________________________________________________________________________________________________
max_pooling1d_73 (MaxPooling1D)  (None, 1, 100)        0           conv1d_73[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_74 (MaxPooling1D)  (None, 1, 100)        0           conv1d_74[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_75 (MaxPooling1D)  (None, 1, 100)        0           conv1d_75[0][0]                  
____________________________________________________________________________________________________
concatenate_25 (Concatenate)     (None, 3, 100)        0           max_pooling1d_73[0][0]           
                                                                   max_pooling1d_74[0][0]           
                                                                   max_pooling1d_75[0][0]           
____________________________________________________________________________________________________
flatten_25 (Flatten)             (None, 300)           0           concatenate_25[0][0]             
____________________________________________________________________________________________________
dense_47 (Dense)                 (None, 1)             301         flatten_25[0][0]                 
====================================================================================================
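For reference, here is a minimal end-to-end sketch of building, compiling, and fitting this CNN-only model. The embedding matrix and the data are hypothetical stand-ins (the vocabulary size of 9026 is chosen only so that 9026 × 50 matches the 451,300 embedding parameters in the summary, and the dropout rate is assumed):

```python
import numpy as np
from tensorflow.keras import layers as L, Model
from tensorflow.keras.initializers import Constant

# Hypothetical stand-ins for the pretrained embedding and the training data.
weights = np.random.rand(9026, 50).astype('float32')
x = np.random.randint(0, weights.shape[0], size=(32, 100))
y = np.random.randint(0, 2, size=(32, 1))

inputs = L.Input(shape=(100,))
emb = L.Embedding(input_dim=weights.shape[0], output_dim=weights.shape[1],
                  embeddings_initializer=Constant(weights),
                  trainable=False)(inputs)
drop = L.Dropout(0.5)(emb)

# Three parallel conv branches, each max-pooled down to a single step.
convs = [L.Conv1D(100, kernel_size=k)(drop) for k in (3, 4, 5)]
pools = [L.MaxPool1D(pool_size=100 - k + 1)(c) for c, k in zip(convs, (3, 4, 5))]
merged = L.Concatenate(axis=1)(pools)          # (None, 3, 100)
out = L.Dense(1, activation='sigmoid')(L.Flatten()(merged))

model = Model(inputs, out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x, y, epochs=1, verbose=0)
```

This reproduces the layer shapes and parameter counts shown in the summary above.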
As I said above, this works quite well and reaches good accuracy in only 3-4 epochs. However, my thinking was that the CNNs capture local patterns, and if I also want to model how those patterns relate to each other over longer distances within a given input vector, I should use some flavor of RNN after the convolutions. So I tried changing the pool_size of the MaxPool1D layers after the convolutions, removing the Flatten, and passing the Concatenate layer into an RNN. For example:

maxpool3 = L.MaxPool1D(pool_size=(50,), strides=(1,))(conv3)
maxpool4 = L.MaxPool1D(pool_size=(50,), strides=(1,))(conv4)
maxpool5 = L.MaxPool1D(pool_size=(49,), strides=(1,))(conv5)
concatenated_tensor = L.Concatenate(axis=1)([maxpool3,maxpool4,maxpool5])
rnn=L.SimpleRNN(75)(concatenated_tensor) 
output = L.Dense(units=1, activation='sigmoid')(rnn)
Now the summary is:

max_pooling1d_95 (MaxPooling1D)  (None, 50, 100)       0           conv1d_97[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_96 (MaxPooling1D)  (None, 50, 100)       0           conv1d_98[0][0]                  
____________________________________________________________________________________________________
max_pooling1d_97 (MaxPooling1D)  (None, 49, 100)       0           conv1d_99[0][0]                  
____________________________________________________________________________________________________
concatenate_32 (Concatenate)     (None, 149, 100)      0           max_pooling1d_95[0][0]           
                                                                   max_pooling1d_96[0][0]           
                                                                   max_pooling1d_97[0][0]           
____________________________________________________________________________________________________
simple_rnn_5 (SimpleRNN)         (None, 75)            13200       concatenate_32[0][0]             
____________________________________________________________________________________________________
dense_51 (Dense)                 (None, 1)             76          simple_rnn_5[0][0]               
====================================================================================================

When I train this model, the predictions are all identical: the ratio of class [1] to class [0]. I've read several posts where people used this scheme successfully, so clearly I'm doing something wrong, and I'd bet it's an embarrassingly silly mistake. Would anyone be willing to help diagnose it?

The first thing you could try is concatenating along the feature axis instead of the time axis. Basically, do this:

maxpool3 = L.MaxPool1D(pool_size=(50,), strides=(1,))(conv3)
maxpool4 = L.MaxPool1D(pool_size=(50,), strides=(1,))(conv4)
maxpool5 = L.MaxPool1D(pool_size=(50,), strides=(1,))(conv5)
concatenated_tensor = L.Concatenate(axis=2)([maxpool3,maxpool4,maxpool5])
rnn=L.SimpleRNN(75)(concatenated_tensor) 
output = L.Dense(units=1, activation='sigmoid')(rnn)
(Note that you have to make sure maxpool3, maxpool4, and maxpool5 have the same number of "time" steps, i.e. maxpool3.shape[1] == maxpool4.shape[1] == maxpool5.shape[1].)
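That caveat matters here: with pool_size=(50,) and stride 1 on all three branches, the branches end with 49, 48, and 47 steps respectively (output length = input length - pool size + 1), so the axis-2 concatenation would fail. A sketch of the fix, choosing a per-branch pool size so every branch has exactly 50 steps (the embedding matrix is again a hypothetical stand-in):

```python
import numpy as np
from tensorflow.keras import layers as L, Model
from tensorflow.keras.initializers import Constant

weights = np.random.rand(9026, 50).astype('float32')  # hypothetical embedding

inputs = L.Input(shape=(100,))
emb = L.Embedding(input_dim=weights.shape[0], output_dim=weights.shape[1],
                  embeddings_initializer=Constant(weights),
                  trainable=False)(inputs)
drop = L.Dropout(0.5)(emb)

# Conv output length is 100 - k + 1; with stride 1, a pool size of 52 - k
# leaves (100 - k + 1) - (52 - k) + 1 = 50 steps in every branch.
branches = []
for k in (3, 4, 5):
    conv = L.Conv1D(100, kernel_size=k)(drop)                # (None, 101-k, 100)
    pool = L.MaxPool1D(pool_size=52 - k, strides=1)(conv)    # (None, 50, 100)
    branches.append(pool)

merged = L.Concatenate(axis=2)(branches)   # features stacked: (None, 50, 300)
rnn = L.SimpleRNN(75)(merged)
out = L.Dense(1, activation='sigmoid')(rnn)
model = Model(inputs, out)
```

Now the RNN sees 50 time steps of 300 features each, rather than 149 steps of 100.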


Second, with 50 time steps, give an LSTM or GRU a chance, since they can capture longer time dependencies better than LSTM.
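A sketch of that swap, with an Input layer standing in for the (None, 50, 300) concatenated tensor produced by the pooling branches:

```python
from tensorflow.keras import layers as L, Model

# `seq` stands in for the concatenated tensor (50 steps x 300 features).
seq = L.Input(shape=(50, 300))
gru = L.GRU(75)(seq)       # L.LSTM(75) is a drop-in alternative
out = L.Dense(1, activation='sigmoid')(gru)
head = Model(seq, out)
```

Gated units (GRU/LSTM) mitigate the vanishing gradients that make a SimpleRNN struggle over 50 steps.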

Are you using a bidirectional LSTM without the convolutions? If you use an LSTM, you generally don't need those convolutional layers. In any case, I think the problem here is that your recurrent layer treats the 149 axis as the sequence axis: is that what you intended? RNN input is a 3D tensor of shape (batch_size, timesteps, input_dim).

I've tried several flavors of bidirectional RNNs with far less success than the CNN scheme, believe it or not. Thanks for the tip about the RNN axis; I'll definitely look into it. I think you meant "RNN" at the end ("...better than LSTM"). I should also note, however, that concatenated_tensor never actually fed into the RNN. The error was: