Python Keras LSTM seq2seq chatbot: training doesn't work and the predictions are wrong. Whatever I input, I get the same reply

I have built a seq2seq-based chatbot and trained it on a corpus of about 20,000 pairs. After 300 epochs the loss is about 0.02. But in the end, when I input a random question such as "where are you going?" or "what is your name?" or anything else, I get the same answer, "It". As you can see, whatever I input, I always get the single word "It". I found that when I take np.argmax over the predicted probability distribution, I get the same index, 4, every time, which is taken as the index of the next word.

In addition, I found that the states h and c predicted by the encoder model contain some unusual values: the largest value coming from the c state is greater than 16.
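
As a point of reference, in a Keras LSTM the hidden state h is computed as o * tanh(c) and so stays in (-1, 1), while the cell state c is unbounded, so large values in c are not by themselves abnormal. A standalone sketch to compare the two (the layer sizes here are made up, not taken from the code below):

import numpy as np
from keras.layers import Input, LSTM
from keras.models import Model

# Sketch: probe the ranges of an LSTM's two returned states.
# h = o * tanh(c) is squashed into (-1, 1); c has no such bound.
x = Input(shape=(12, 50))
_, h, c = LSTM(300, return_state=True)(x)
probe = Model(x, [h, c])

h_val, c_val = probe.predict(np.random.randn(4, 12, 50))
print('max |h|:', np.abs(h_val).max())   # always below 1
print('max |c|:', np.abs(c_val).max())   # may grow well beyond 1 in a trained model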

import numpy as np
from keras.layers import Input, Embedding, LSTM, Dense, TimeDistributed
from keras.models import Model

# Pre-trained embedding layer, frozen during training
embed_layer = Embedding(input_dim=vocab_size, output_dim=50, trainable=False)
embed_layer.build((None,))
embed_layer.set_weights([embedding_matrix])

# Encoder LSTM returns its final states; decoder LSTM returns full sequences
LSTM_cell = LSTM(300, return_state=True)
LSTM_decoder = LSTM(300, return_sequences=True, return_state=True)

# Project every decoder timestep onto the vocabulary
dense = TimeDistributed(Dense(vocab_size, activation='softmax'))

# encoder input and decoder input
input_context = Input(shape=(maxLen, ), dtype='int32', name='input_context')
input_target = Input(shape=(maxLen, ), dtype='int32', name='input_target')

input_context_embed = embed_layer(input_context)
input_target_embed = embed_layer(input_target)

_, context_h, context_c = LSTM_cell(input_context_embed)
decoder_lstm, _, _ = LSTM_decoder(input_target_embed, 
                                  initial_state=[context_h, context_c])

output = dense(decoder_lstm)

model = Model([input_context, input_target], output)

model.compile(optimizer='adam', loss='categorical_crossentropy', 
              metrics=['accuracy'])
model.summary()

model.fit([context_, final_target_], outs, epochs=1, batch_size=128, validation_split=0.2)
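
The arrays context_, final_target_, and outs fed to model.fit are not defined in the snippet above. Below is a minimal sketch of how such inputs are typically prepared for teacher forcing; encoded_questions and encoded_answers are assumed lists of word-index sequences, not names from the original post:

from keras.preprocessing import sequence
from keras.utils import to_categorical

# Assumed: encoded_questions / encoded_answers are lists of index lists,
# with each answer starting with the BOS token
context_ = sequence.pad_sequences(encoded_questions, maxlen=maxLen)
final_target_ = sequence.pad_sequences(encoded_answers, maxlen=maxLen)

# Training target: decoder input shifted one step left, one-hot encoded
shifted = np.hstack([final_target_[:, 1:],
                     np.zeros((len(final_target_), 1), dtype='int32')])
outs = to_categorical(shifted, num_classes=vocab_size)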

My input:

what is your name?
['what', 'is', 'your', 'name', '?']
[[  0   0   0   0   0   0   0   0 218  85  20 206  22]]

What I got: the same word "It" every time.

Two days later, I found the reason: my word2idx was not the exact inverse of my idx2word. Thanks to everyone who looked at this.
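
A quick sanity check for that failure mode (a sketch, using the dictionary names from the code in this post):

# index_to_word must be the exact inverse of word_to_index; otherwise
# decoding maps argmax indices to the wrong words even when the model
# itself predicts correctly.
index_to_word = {idx: word for word, idx in word_to_index.items()}
assert all(word_to_index[index_to_word[i]] == i for i in index_to_word)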

# Rebuild the training graph so the shared layers are connected
input_context = Input(shape=(maxLen,), dtype='int32', name='input_context')
input_target = Input(shape=(maxLen,), dtype='int32', name='input_target')

input_ctx_embed = embed_layer(input_context)
input_tar_embed = embed_layer(input_target)

_, context_h, context_c = LSTM_cell(input_ctx_embed)
decoder_lstm, _, _ = LSTM_decoder(input_tar_embed, 
                                  initial_state=[context_h, context_c])
output = dense(decoder_lstm)

# Inference encoder: maps a question to the initial decoder states
context_model = Model(input_context, [context_h, context_c])

# Inference decoder: takes the previous token (embedded) plus the
# current states, returns the next-word distribution and new states
target_h = Input(shape=(300,))
target_c = Input(shape=(300,))

target, h, c = LSTM_decoder(input_tar_embed, initial_state=[target_h, target_c])
output = dense(target)

target_model = Model([input_target, target_h, target_c], [output, h, c])


import re
import pickle
import numpy as np
from keras.preprocessing import sequence

maxlen = 12
with open('reverse_dictionary.pkl', 'rb') as f:
    index_to_word = pickle.load(f)

question = "what is your name?"
# question = "where are you going?"
print(question)
a = question.split()
for pos, i in enumerate(a):
    # Chain the clean-up steps on a[pos]; applying each re.sub to the
    # raw token i would discard the previous substitutions
    a[pos] = re.sub(r'[^a-zA-Z0-9 .,?!]', '', i)
    a[pos] = re.sub(r' +', ' ', a[pos])
    a[pos] = re.sub(r'([\w]+)([,;.?!#&\'\"-]+)([\w]+)?', r'\1 \2 \3', a[pos])
    if len(a[pos].split()) > maxlen:
        # Truncate to maxlen words, cutting at the first ., ? or ! if present
        a[pos] = ' '.join(a[pos].split()[:maxlen])
        if '.' in a[pos]:
            ind = a[pos].index('.')
            a[pos] = a[pos][:ind+1]
        if '?' in a[pos]:
            ind = a[pos].index('?')
            a[pos] = a[pos][:ind+1]
        if '!' in a[pos]:
            ind = a[pos].index('!')
            a[pos] = a[pos][:ind+1]

question = ' '.join(a).split()
print(question)

question = np.array([word_to_index[w] for w in question])
question = sequence.pad_sequences([question], maxlen=maxLen)
#                                   padding='post', truncating='post')
print(question)

question_h, question_c = context_model.predict(question)

# Seed the decoder input with the BOS token in the last position
answer = np.zeros([1, maxLen])
answer[0, -1] = word_to_index['BOS']
'''
i keeps track of the length of the generated answer.
This stops the model from generating sequences of more than 20 words.
'''
i = 1

answer_ = []
flag = 0

while flag != 1:
    prediction, prediction_h, prediction_c = target_model.predict([
        answer, question_h, question_c
    ])
    # Greedy decoding: take the most likely word at the last timestep
    word_arg = np.argmax(prediction[0, -1, :])
    answer_.append(index_to_word[word_arg])

    if word_arg == word_to_index['EOS'] or i > 20:
        flag = 1
    # Feed the predicted word and the updated states back into the decoder
    answer = np.zeros([1, maxLen])
    answer[0, -1] = word_arg
    question_h = prediction_h
    question_c = prediction_c
    i += 1

print(' '.join(answer_))