NLP: Sentiment analysis with LSTM (model not producing good output)

Tags: nlp, nltk, lstm

I built a sentiment analysis model using an LSTM, but it is making very poor predictions.

My LSTM model looks like this:

from tensorflow.keras.layers import Input, LSTM, Dropout, Dense, Activation
from tensorflow.keras.models import Model

def ltsm_model(input_shape, word_to_vec_map, word_to_index):
    """
    Function creating the ltsm_model model's graph.

    Arguments:
    input_shape -- shape of the input, usually (max_len,)
    word_to_vec_map -- dictionary mapping every word in the vocabulary to its 50-dimensional vector representation
    word_to_index -- dictionary mapping from words to their indices in the vocabulary (400,001 words)

    Returns:
    model -- a model instance in Keras
    """

    ### START CODE HERE ###
    # Define sentence_indices as the input of the graph; it should be of shape input_shape and dtype 'int32' (as it contains indices).
    sentence_indices = Input(shape=input_shape, dtype='int32')

    # Create the embedding layer pretrained with GloVe vectors (≈1 line)
    embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)

    # Propagate sentence_indices through the embedding layer to get back the embeddings
    embeddings = embedding_layer(sentence_indices)

    # Propagate the embeddings through an LSTM layer with a 128-dimensional hidden state.
    # Be careful: the returned output should be a batch of sequences.
    X = LSTM(128, return_sequences=True)(embeddings)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through another LSTM layer with a 128-dimensional hidden state.
    # Be careful: the returned output should be a single hidden state, not a batch of sequences.
    X = LSTM(128, return_sequences=False)(X)
    # Add dropout with a probability of 0.5
    X = Dropout(0.5)(X)
    # Propagate X through a Dense layer to get back a batch of 2-dimensional vectors
    X = Dense(2, activation='relu')(X)
    # Add a softmax activation
    X = Activation('softmax')(X)

    # Create the Model instance which converts sentence_indices into X.
    model = Model(inputs=[sentence_indices], outputs=X)

    ### END CODE HERE ###

    return model
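The compile and fit steps are not shown above. As a point of reference, a minimal self-contained sketch of how such a two-class softmax model is typically compiled and trained (toy data and a plain `Embedding` layer stand in for the real dataset and the `pretrained_embedding_layer` helper, which are not reproduced here):

```python
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, Dropout, Dense, Activation
from tensorflow.keras.models import Model

max_len = 10
vocab_size = 50  # toy vocabulary; the real one has 400,001 words

# Toy embedding layer standing in for pretrained_embedding_layer()
sentence_indices = Input(shape=(max_len,), dtype='int32')
embeddings = Embedding(vocab_size, 50)(sentence_indices)
X = LSTM(128, return_sequences=True)(embeddings)
X = Dropout(0.5)(X)
X = LSTM(128, return_sequences=False)(X)
X = Dropout(0.5)(X)
X = Dense(2)(X)
X = Activation('softmax')(X)
model = Model(inputs=sentence_indices, outputs=X)

# Two-class softmax output pairs with one-hot labels and categorical cross-entropy
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Hypothetical toy data: 8 sentences of word indices, one-hot 2-class labels
x = np.random.randint(0, vocab_size, size=(8, max_len))
y = np.eye(2)[np.random.randint(0, 2, size=8)]
model.fit(x, y, epochs=1, batch_size=4, verbose=0)
p = model.predict(x, verbose=0)  # shape (8, 2); each row sums to 1
```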
Here is what my training dataset looks like:

And here is my test data:

x_test = np.array(['amazing!: this soundtrack is my favorite music..'])
X_test_indices = sentences_to_indices(x_test, word_to_index, maxLen)
print(x_test[0] +' '+  str(np.argmax(model.predict(X_test_indices))))
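The `sentences_to_indices` helper used here is not shown in the question. In the assignment this code appears to be based on, it maps each word to its vocabulary index and zero-pads to a fixed length; a minimal sketch under that assumption (the toy `word_to_index` mapping is hypothetical):

```python
import numpy as np

def sentences_to_indices_sketch(sentences, word_to_index, max_len):
    """Map each word to its vocabulary index, zero-padding every row to max_len."""
    X = np.zeros((len(sentences), max_len), dtype='int32')
    for i, sentence in enumerate(sentences):
        words = sentence.lower().split()
        for j, w in enumerate(words[:max_len]):
            if w in word_to_index:
                X[i, j] = word_to_index[w]
    return X

word_to_index = {'amazing': 1, 'this': 2, 'soundtrack': 3}  # toy vocabulary
out = sentences_to_indices_sketch(['this soundtrack amazing'], word_to_index, 5)
print(out)  # → [[2 3 1 0 0]]
```

Note that punctuation handling matters: with a plain `split()`, a token such as `amazing!:` would not match the vocabulary entry `amazing` and would be silently dropped, which is worth checking when predictions look wrong.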
For this, I got the following output:

amazing!: this soundtrack is my favorite music.. 0

But this is a positive sentiment, so the prediction should be 1.

This is also the output of fitting my model:


How can I improve my model's performance? I think it is a very poor model.
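One detail worth double-checking in the model code above (an observation, not a guaranteed fix): `Dense(2, activation='relu')` followed by `Activation('softmax')` applies two activations in a row. ReLU clamps negative scores to zero before softmax sees them, which compresses the output probabilities and limits how confidently the model can separate the two classes. A small numeric illustration in plain NumPy:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([-1.2, 0.8])  # hypothetical pre-activation scores

# Head without the extra ReLU: softmax directly on the logits
p_plain = softmax(logits)            # ≈ [0.119, 0.881]

# Head as written in the question: ReLU clamps -1.2 to 0, then softmax
p_relu = softmax(np.maximum(logits, 0.0))  # ≈ [0.310, 0.690]
```

The negative evidence against class 0 is discarded by the ReLU, so the relu-then-softmax head can never be as confident as a plain softmax head; the usual pattern is either `Dense(2)` followed by `Activation('softmax')`, or `Dense(2, activation='softmax')` in one step.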

Any help would be appreciated.