
Python: Merging models with Keras' functional API


I'm new to deep learning and have been teaching myself some NLP concepts. I'm trying to understand them by working through a sentence-similarity scoring model for Quora question pairs, as explained in this excellent tutorial.

That code is quite old, and while trying to get it running on a new dataset I found that the model Merge API has been deprecated. I've been stuck on this for a long time, struggling to merge the models (since I don't really understand the intricacies).

Can someone help me confirm whether I've converted this the right way? My main question is that the input sequences are padded to a
max_len
of 40, so does that mean I have to specify an input layer with shape 40?
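
To make sure I'm asking the right thing, this is how I currently understand the relationship between padding and the input shape, as a tiny sketch of my own (toy vocabulary size, not the tutorial's code):

from keras.layers import Input, Embedding
from keras.preprocessing import sequence

max_len = 40
# pad_sequences(..., maxlen=40) gives arrays of shape (num_samples, 40)
padded = sequence.pad_sequences([[1, 2, 3], [4, 5]], maxlen=max_len)

# so the matching functional-API input would be Input(shape=(40,));
# the batch dimension is left out of `shape`
seq_input = Input(shape=(max_len,), dtype='int32')
embedded = Embedding(input_dim=10000, output_dim=300)(seq_input)  # -> (None, 40, 300)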

Old code from the site:

tk = text.Tokenizer(nb_words=200000)

max_len = 40
tk.fit_on_texts(list(data.question1.values) + list(data.question2.values.astype(str)))
x1 = tk.texts_to_sequences(data.question1.values)
x1 = sequence.pad_sequences(x1, maxlen=max_len)

x2 = tk.texts_to_sequences(data.question2.values.astype(str))
x2 = sequence.pad_sequences(x2, maxlen=max_len)

word_index = tk.word_index

ytrain_enc = np_utils.to_categorical(y)

embeddings_index = {}
f = open('data/glove.840B.300d.txt')
for line in tqdm(f):
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()

print('Found %s word vectors.' % len(embeddings_index))

embedding_matrix = np.zeros((len(word_index) + 1, 300))
for word, i in tqdm(word_index.items()):
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector

max_features = 200000
filter_length = 5
nb_filter = 64
pool_length = 4

model = Sequential()
print('Build model...')

model1 = Sequential()
model1.add(Embedding(len(word_index) + 1,
                     300,
                     weights=[embedding_matrix],
                     input_length=40,
                     trainable=False))

model1.add(TimeDistributed(Dense(300, activation='relu')))
model1.add(Lambda(lambda x: K.sum(x, axis=1), output_shape=(300,)))

model2 = Sequential()
model2.add(Embedding(len(word_index) + 1,
                     300,
                     weights=[embedding_matrix],
                     input_length=40,
                     trainable=False))

model2.add(TimeDistributed(Dense(300, activation='relu')))
model2.add(Lambda(lambda x: K.sum(x, axis=1), output_shape=(300,)))

model3 = Sequential()
model3.add(Embedding(len(word_index) + 1,
                     300,
                     weights=[embedding_matrix],
                     input_length=40,
                     trainable=False))
model3.add(Convolution1D(nb_filter=nb_filter,
                         filter_length=filter_length,
                         border_mode='valid',
                         activation='relu',
                         subsample_length=1))
model3.add(Dropout(0.2))

model3.add(Convolution1D(nb_filter=nb_filter,
                         filter_length=filter_length,
                         border_mode='valid',
                         activation='relu',
                         subsample_length=1))

model3.add(GlobalMaxPooling1D())
model3.add(Dropout(0.2))

model3.add(Dense(300))
model3.add(Dropout(0.2))
model3.add(BatchNormalization())

model4 = Sequential()
model4.add(Embedding(len(word_index) + 1,
                     300,
                     weights=[embedding_matrix],
                     input_length=40,
                     trainable=False))
model4.add(Convolution1D(nb_filter=nb_filter,
                         filter_length=filter_length,
                         border_mode='valid',
                         activation='relu',
                         subsample_length=1))
model4.add(Dropout(0.2))

model4.add(Convolution1D(nb_filter=nb_filter,
                         filter_length=filter_length,
                         border_mode='valid',
                         activation='relu',
                         subsample_length=1))

model4.add(GlobalMaxPooling1D())
model4.add(Dropout(0.2))

model4.add(Dense(300))
model4.add(Dropout(0.2))
model4.add(BatchNormalization())
model5 = Sequential()
model5.add(Embedding(len(word_index) + 1, 300, input_length=40, dropout=0.2))
model5.add(LSTM(300, dropout_W=0.2, dropout_U=0.2))

model6 = Sequential()
model6.add(Embedding(len(word_index) + 1, 300, input_length=40, dropout=0.2))
model6.add(LSTM(300, dropout_W=0.2, dropout_U=0.2))

merged_model = Sequential()
merged_model.add(Merge([model1, model2, model3, model4, model5, model6], mode='concat'))
merged_model.add(BatchNormalization())

merged_model.add(Dense(300))
merged_model.add(PReLU())
merged_model.add(Dropout(0.2))
merged_model.add(BatchNormalization())

merged_model.add(Dense(300))
merged_model.add(PReLU())
merged_model.add(Dropout(0.2))
merged_model.add(BatchNormalization())

merged_model.add(Dense(300))
merged_model.add(PReLU())
merged_model.add(Dropout(0.2))
merged_model.add(BatchNormalization())

merged_model.add(Dense(300))
merged_model.add(PReLU())
merged_model.add(Dropout(0.2))
merged_model.add(BatchNormalization())

merged_model.add(Dense(300))
merged_model.add(PReLU())
merged_model.add(Dropout(0.2))
merged_model.add(BatchNormalization())

merged_model.add(Dense(1))
merged_model.add(Activation('sigmoid'))
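
As far as I can tell, the Merge([...], mode='concat') layer above is exactly what was removed, and the functional-API replacement is keras.layers.concatenate applied to the branch output tensors. A toy sketch of that pattern as I understand it (made-up layer sizes, not the tutorial's code):

from keras.layers import Input, Dense, concatenate
from keras.models import Model

in_a = Input(shape=(40,))
in_b = Input(shape=(40,))
branch_a = Dense(32, activation='relu')(in_a)
branch_b = Dense(32, activation='relu')(in_b)

merged = concatenate([branch_a, branch_b])            # replaces Merge(..., mode='concat')
out = Dense(1, activation='sigmoid')(merged)
toy_model = Model(inputs=[in_a, in_b], outputs=out)   # trained with toy_model.fit([x_a, x_b], y)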

What I have so far, going off the specified architecture and my general understanding, where I try to do the merge with the functional API:

tk = text.Tokenizer(nb_words=200000, filters='!"#$%&*+,/;<>?@\^`{|}~')

max_len = 40
tk.fit_on_texts(list(data.log1.values) + list(data.log2.values.astype(str)))
x1 = tk.texts_to_sequences(data.log1.values)
x1 = sequence.pad_sequences(x1, maxlen=max_len)

x2 = tk.texts_to_sequences(data.log2.values.astype(str))
x2 = sequence.pad_sequences(x2, maxlen=max_len)

word_index = tk.word_index

ytrain_enc = np_utils.to_categorical(y)

embeddings_index = {}
f = open('data/glove.840B.300d.txt')
for line in tqdm(f):
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()

print('Found %s word vectors.' % len(embeddings_index))

embedding_matrix = np.zeros((len(word_index) + 1, 300))
for word, i in tqdm(word_index.items()):
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector

max_features = 200000
filter_length = 5
nb_filter = 64
pool_length = 4

input_layer = Input(shape=(40,))
print('Build model...')

# Converting everything to Keras' functional API
# We don't need model1 = Sequential() since the functional API specifies this
# implicitly
model1 = Embedding(len(word_index) + 1, 300, weights=[embedding_matrix],
                   input_length=40, trainable=False)(input_layer)
model1 = TimeDistributed(Dense(300, activation='relu'))(model1)
model1 = Lambda(lambda x: K.sum(x, axis=1), output_shape=(300,))(model1)

model2 = Embedding(len(word_index) + 1, 300, weights=[embedding_matrix],
                   input_length=40, trainable=False)(input_layer)
model2 = TimeDistributed(Dense(300, activation='relu'))(model2)
model2 = Lambda(lambda x: K.sum(x, axis=1), output_shape=(300,))(model2)

model3 = Embedding(len(word_index) + 1, 300, weights=[embedding_matrix],
                   input_length=40, trainable=False)(input_layer)
model3 = Convolution1D(nb_filter=nb_filter, filter_length=filter_length,
                       border_mode='valid', activation='relu',
                       subsample_length=1)(model3)
model3 = Dropout(0.2)(model3)
model3 = Convolution1D(nb_filter=nb_filter, filter_length=filter_length,
                       border_mode='valid', activation='relu',
                       subsample_length=1)(model3)
model3 = GlobalMaxPooling1D()(model3)
model3 = Dropout(0.2)(model3)
model3 = Dense(300)(model3)
model3 = Dropout(0.2)(model3)
model3 = BatchNormalization()(model3)

model4 = Embedding(len(word_index) + 1, 300, weights=[embedding_matrix],
                   input_length=40, trainable=False)(input_layer)
model4 = Convolution1D(nb_filter=nb_filter, filter_length=filter_length, border_mode='valid',
                       activation='relu', subsample_length=1)(model4)
model4 = Dropout(0.2)(model4)
model4 = Convolution1D(nb_filter=nb_filter, filter_length=filter_length, border_mode='valid',
                       activation='relu', subsample_length=1)(model4)
model4 = GlobalMaxPooling1D()(model4)
model4 = Dropout(0.2)(model4)
model4 = Dense(300)(model4)
model4 = Dropout(0.2)(model4)
model4 = BatchNormalization()(model4)

model5 = Embedding(len(word_index) + 1, 300, input_length=40, dropout=0.2)(input_layer)
model5 = LSTM(300, dropout_W=0.2, dropout_U=0.2)(model5)

model6 = Embedding(len(word_index) + 1, 300, input_length=40, dropout=0.2)(input_layer)
model6 = LSTM(300, dropout_W=0.2, dropout_U=0.2)(model6)


merged_model = concatenate([model1, model2, model3, model4, model5, model6])

merged_model = BatchNormalization()(merged_model)
merged_model = Dense(300)(merged_model)
merged_model = PReLU()(merged_model)
merged_model = Dropout(0.2)(merged_model)

merged_model = BatchNormalization()(merged_model)
merged_model = Dense(300)(merged_model)
merged_model = PReLU()(merged_model)
merged_model = Dropout(0.2)(merged_model)

merged_model = BatchNormalization()(merged_model)
merged_model = Dense(300)(merged_model)
merged_model = PReLU()(merged_model)
merged_model = Dropout(0.2)(merged_model)

merged_model = BatchNormalization()(merged_model)
merged_model = Dense(300)(merged_model)
merged_model = PReLU()(merged_model)
merged_model = Dropout(0.2)(merged_model)

merged_model = BatchNormalization()(merged_model)
merged_model = Dense(300)(merged_model)
merged_model = PReLU()(merged_model)
merged_model = Dropout(0.2)(merged_model)

merged_model = BatchNormalization()(merged_model)
# merged_model = Dense(1)(merged_model)
# merged_model = Activation('sigmoid')(merged_model)

predictions = Dense(1, activation='sigmoid')(merged_model)
model = Model(inputs=input_layer, outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.summary()

checkpoint = ModelCheckpoint('weights.h5', monitor='val_acc', save_best_only=True, verbose=2)
model.fit([x1, x2, x1, x2, x1, x2], y=y, batch_size=384, nb_epoch=200,
                 verbose=1, validation_split=0.1, shuffle=True, callbacks=[checkpoint])

I'm not sure whether what I'm doing is entirely right, so I'm asking for some help here.