Machine learning: Why does my neural network sequence model reach 0.9998 accuracy from the very start?


I am building a hashtag recommendation model for Twitter media posts. It takes the tweet text as input, applies a 300-dimensional word embedding to it, and classifies it into 198 hashtag classes. When I run my model, I get 0.9998 accuracy from the very first epoch, and it never changes afterwards! What is wrong with my model?

import numpy as np
import pickle
from keras.layers.normalization import BatchNormalization
from keras.models import Sequential, load_model
from keras.layers import Dense, Dropout, Activation,LSTM, Embedding
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from keras import regularizers, initializers
package="2018_pickle"
with open(path1,"rb") as f:
    maxLen,l_h2i,l_w2i=pickle.load(f)
with open(path2,"rb") as f:
    X_train,X_test,X_train_indices,X_test_indices=pickle.load(f)
with open(path3,"rb") as f:
    Y_train,Y_test,Y_train_oh,Y_test_oh=pickle.load(f)
with open(path4,"rb") as f:
    emd_matrix=pickle.load(f)


if __name__ == '__main__':
    modelname = "model_1"
    train = False               # set to True to (re)train the model
    vocab_size = len(emd_matrix)
    emd_dim = emd_matrix.shape[1]
    if train:
        model = Sequential()
        # frozen pretrained embeddings: 300-dim vectors, fixed-length input
        model.add(Embedding(vocab_size, emd_dim, weights=[emd_matrix],
                            input_length=maxLen, trainable=False))
        model.add(LSTM(256, return_sequences=True, activation="relu",
                       kernel_regularizer=regularizers.l2(0.01),
                       kernel_initializer=initializers.glorot_normal(seed=None)))
        model.add(LSTM(256, return_sequences=True, activation="relu",
                       kernel_regularizer=regularizers.l2(0.01),
                       kernel_initializer=initializers.glorot_normal(seed=None)))
        model.add(LSTM(256, return_sequences=False, activation="relu",
                       kernel_regularizer=regularizers.l2(0.01),
                       kernel_initializer=initializers.glorot_normal(seed=None)))
        model.add(Dense(198, activation='softmax'))
        model.compile(loss='binary_crossentropy', optimizer='adam',
                      metrics=['accuracy'])
        # filepath (the checkpoint destination) is not defined in the post
        checkpoint = ModelCheckpoint(filepath, monitor="loss",
                                     verbose=1, save_best_only=True, mode='min')
        reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                      patience=2, min_lr=0.000001)
        history = model.fit(X_train_indices, Y_train_oh, batch_size=2048,
                            epochs=5, validation_split=0.1, shuffle=True,
                            callbacks=[checkpoint, reduce_lr])


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_10 (Embedding)     (None, 54, 300)           22592100  
_________________________________________________________________
lstm_18 (LSTM)               (None, 54, 256)           570368    
_________________________________________________________________
lstm_19 (LSTM)               (None, 54, 256)           525312    
_________________________________________________________________
lstm_20 (LSTM)               (None, 256)               525312    
_________________________________________________________________
dense_7 (Dense)              (None, 198)               50886     
=================================================================
Total params: 24,263,978
Trainable params: 1,671,878
Non-trainable params: 22,592,100
_________________________________________________________________

This is most likely due to incorrectly using loss='binary_crossentropy' for a multi-class classification problem. With that loss, Keras resolves metrics=['accuracy'] to binary accuracy, which is computed element-wise over all 198 outputs: since a one-hot target row is almost entirely zeros, a model that predicts near-zero everywhere already matches nearly every entry, so the reported accuracy is close to 1 no matter how useless the model is.
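A minimal numpy sketch of the effect (hypothetical values, not the poster's data): with 198 classes, a model that outputs roughly zero for every class still matches 197 of the 198 binary targets per sample.

import numpy as np

num_classes = 198
y_true = np.zeros(num_classes)
y_true[0] = 1.0                      # one-hot target: a single 1, 197 zeros
y_pred = np.full(num_classes, 0.01)  # a "model" that predicts ~0 everywhere

# binary accuracy thresholds each output at 0.5 and compares element-wise
binary_acc = np.mean((y_pred > 0.5) == (y_true > 0.5))
print(binary_acc)                    # 197/198 ≈ 0.995, despite missing the true class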

You should change your model compilation to

model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
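With categorical_crossentropy, Keras resolves metrics=['accuracy'] to categorical accuracy instead, which compares the argmax of the prediction against the argmax of the target per sample. A rough sketch with made-up values:

import numpy as np

y_true = np.array([[0, 0, 1],
                   [1, 0, 0]])
y_pred = np.array([[0.1, 0.2, 0.7],
                   [0.3, 0.4, 0.3]])
# one comparison per sample, not per output element
cat_acc = np.mean(np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1))
print(cat_acc)  # 0.5: first sample correct, second sample wrong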

Thanks for your answer. I changed it as you suggested, and now there is another problem: the accuracy is below 0.001 @desertnaut

@raminkarimian That is the true accuracy of this model on this data; what you are reporting now is a different issue, so please accept the answer and open a new question for the new problem.