Machine learning: implementing a deep learning model in Keras

Tags: machine-learning, keras, deep-learning, tensorflow2.0

I am trying to implement a neural network model in Keras, but I have run into a dimensionality problem. According to the model architecture, the last (fully connected) layer should give me an output of dimension 1, but instead I get 2D data as output.

I am trying to implement Figure 4 from the paper.

My implementation is as follows:

import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.utils import plot_model
from tensorflow.keras.regularizers import l2
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence

def get_model(vocabulary_size, embedding_dim, input_length, summary=True):
  inputs = layers.Input(shape=(input_length,))
  x = layers.Embedding(vocabulary_size, embedding_dim)(inputs)

  branch1 = layers.Conv1D(128, kernel_size=(3),padding='same',kernel_regularizer=l2(0.01), activation='relu')(x)
  branch1 = layers.MaxPool1D(pool_size=(2))(branch1)
  branch1 = layers.Dropout(0.5)(branch1)
  branch1 = layers.BatchNormalization()(branch1)
  branch1 = layers.LSTM(128, return_sequences=True)(branch1)

  branch2 = layers.Conv1D(128, kernel_size=(5),padding='same',kernel_regularizer=l2(0.01), activation='relu')(x)
  branch2 = layers.MaxPool1D(pool_size=(2))(branch2)
  branch2 = layers.Dropout(0.5)(branch2)
  branch2 = layers.BatchNormalization()(branch2)
  branch2 = layers.LSTM(128, return_sequences=True)(branch2)


  branch3 = layers.Conv1D(128, kernel_size=(7),padding='same',kernel_regularizer=l2(0.01), activation='relu')(x)
  branch3 = layers.MaxPool1D(pool_size=(2))(branch3)
  branch3 = layers.Dropout(0.5)(branch3)
  branch3 = layers.BatchNormalization()(branch3)
  branch3 = layers.LSTM(128, return_sequences=True)(branch3)

  branch4 = layers.Conv1D(128, kernel_size=(9),padding='same',kernel_regularizer=l2(0.01), activation='relu')(x)
  branch4 = layers.MaxPool1D(pool_size=(2))(branch4)
  branch4 = layers.Dropout(0.5)(branch4)
  branch4 = layers.BatchNormalization()(branch4)
  branch4 = layers.LSTM(128, return_sequences=True)(branch4)

  concat = layers.concatenate([branch1, branch2, branch3, branch4], name='Concatenate')

  outputs = layers.Dense(1, activation='sigmoid')(concat)


  model = Model(inputs, outputs)
  model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=1e-3), metrics=['accuracy'])

  model.summary()

  return model

EMBEDDING_DIM = 32
VOCABULARY_SIZE = 5000
seq_length = 500

my_model = get_model(VOCABULARY_SIZE, EMBEDDING_DIM, seq_length)
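For reference, the extra dimension can be seen directly from the model's output shape: because every branch keeps return_sequences=True, the time axis survives the LSTMs and Dense(1) is applied per time step. A quick check, assuming the model builds as posted:

print(my_model.output_shape)   # expected to print (None, 250, 1) rather than (None, 1)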

You need return_sequences=False in the last LSTM layer so that it returns only the last hidden state, i.e. a single vector. That way each of the four branches returns one vector, the four vectors are concatenated into one, and the final Dense layer produces the expected 1-dimensional output.
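A minimal sketch of that change inside get_model, assuming everything else stays as posted (return_sequences=False is also the LSTM default, so the argument can simply be dropped):

  # Each branch now returns only its last hidden state: shape (batch, 128)
  branch1 = layers.LSTM(128, return_sequences=False)(branch1)
  branch2 = layers.LSTM(128, return_sequences=False)(branch2)
  branch3 = layers.LSTM(128, return_sequences=False)(branch3)
  branch4 = layers.LSTM(128, return_sequences=False)(branch4)

  # Four (batch, 128) vectors concatenated along the feature axis: (batch, 512)
  concat = layers.concatenate([branch1, branch2, branch3, branch4], name='Concatenate')

  # Dense(1) on a 2D tensor gives the desired (batch, 1) output
  outputs = layers.Dense(1, activation='sigmoid')(concat)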


Comments:

- Hi, welcome to Stack Overflow. Please consult the guidelines before asking. You need to create a minimal working example that reproduces the problem; for that you need to provide the data and the rest of the code.
- The previous answer by @gaussian will make it 1-dimensional as required, but the paper may still have details, such as how the LSTM is handled and whether the last hidden state is enough. I cannot see the paper.
- Wow, that works. Thank you very much for the solution!
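If the paper turns out to use the whole LSTM output sequence rather than only the last hidden state, one common alternative (an assumption for illustration, not something stated in the thread) is to keep return_sequences=True as posted and collapse the time axis with a pooling layer before concatenating:

  # Hypothetical variant: keep the full LSTM sequences, then pool over the time axis
  branch1 = layers.GlobalMaxPooling1D()(branch1)   # (batch, 250, 128) -> (batch, 128)
  branch2 = layers.GlobalMaxPooling1D()(branch2)
  branch3 = layers.GlobalMaxPooling1D()(branch3)
  branch4 = layers.GlobalMaxPooling1D()(branch4)

  concat = layers.concatenate([branch1, branch2, branch3, branch4])   # (batch, 512)
  outputs = layers.Dense(1, activation='sigmoid')(concat)             # (batch, 1)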