
Python: smart chatbot is not learning from the data


I built this chatbot by following a YouTube tutorial, and in the video it works fine. It takes a phrase, looks for keywords, and compares them against a bag of words to determine a likely response.
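The gist of the approach, as a toy illustration (this is not code from the tutorial; the names here are made up):

vocab = ["hello", "open", "thank"]
sentence = "are you open today"
bag = [1 if w in sentence.split() else 0 for w in vocab]  # -> [0, 1, 0]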

Here is the code:

from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()

import nltk
import numpy
import tflearn 
import tensorflow as tf
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import random 
import json 
import pickle

with open("intents.json") as file:
    data = json.load(file)

#try to use saved, already-processed data before having to run all of this again
# try:
#     with open("data.pickle", "rb") as f:
#         #load the four variables that were saved into the pickle file
#         words, labels, training, output = pickle.load(f)
# except:

#all words from the patterns go in this list
words = []
#all labels go in this list
labels = []
#list of all patterns
docs_x = []
#tag for each pattern in docs_x
docs_y = []

for intent in data["intents"]:
    for pattern in intent["patterns"]:
        wrds = nltk.word_tokenize(pattern)
        words.extend(wrds)
        docs_x.append(wrds) #append the tokenized words
        docs_y.append(intent["tag"])

    if intent["tag"] not in labels:
        labels.append(intent["tag"])

#This simply creates a unique list of stemmed words to use in the next step of our data preprocessing.
words = [stemmer.stem(w.lower()) for w in words if w != "?"]
words = sorted(list(set(words)))
# set removes duplicates, list converts the set back into a list, and sorted sorts it

labels = sorted(labels)

#Neural networks don't understand strings, only numbers, so we're going to
# create a 'bag' of words that represents the words in any given pattern,
# and we'll use it to train our model.
# A bag of words is a list of zeros and ones whose length equals the number of
# words in our vocabulary (a count-based bag could hold values other than 0 or 1).
# Each position in the list marks whether the corresponding word appears in
# the pattern, so it tells us which words are present and which are not.


# We will also create output lists whose length equals the number of labels/tags
# in our dataset. Each position in the list represents one distinct
# label/tag, and a 1 in one of those positions marks which label/tag is represented.
training = []
output = []

out_empty = [0 for _ in range(len(labels))]

for x, doc in enumerate(docs_x):
    bag = []

    wrds = [stemmer.stem(w) for w in doc]
    #check each vocabulary word against the pattern we are looping through
    for w in words:
        if w in wrds:
            bag.append(1)
        else:
            bag.append(0)

output_row = out_empty[:]
#look through the labels list, see where the tag is, and set that position to 1 in the output row
output_row[labels.index(docs_y[x])] = 1

#append the list that holds the bag
training.append(bag)
#append the output row
output.append(output_row)
#now we have two lists: the training list holds bags of words, and output holds lists of zeros and ones

#turn these into numpy arrays
training = numpy.array(training)
output = numpy.array(output)

with open("data.pickle", "wb") as f:
    #write these variables into a pickle file so we can save them
    pickle.dump((words, labels, training, output), f)

#WORD STEMMING
# Stemming a word means trying to find the root of the word. For example,
# the stem of "thats" might be "that", and the word "happening" would have
# the stem "happen". We use this process of stemming words to reduce the
# vocabulary of our model and to try to capture the more general meaning
# behind sentences.
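# Aside (not part of the tutorial code): you can print a few stems to see what
# the Lancaster stemmer actually produces; the exact outputs depend on its
# rule set, e.g.:
# print(stemmer.stem("thats"), stemmer.stem("happening"))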

#reset underlying data graph
tf.compat.v1.reset_default_graph()

#define the input shape we expect for our model: the length of one training example (the bag of words) gives the number of input neurons
net = tflearn.input_data(shape=[None, len(training[0])])
#add fully connected hidden layers to our network: 2 hidden layers with 8 neurons each.
#the hidden layers learn the weights that map words to each output; they do the work. With more complex tags you may want more layers.
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
#connect to an output layer with one neuron representing each of our classes
#softmax gives probabilities for each output; each output neuron is a tag, e.g. 'greeting'
#the model weighs the probability of the input being each tag, and we pick the corresponding response
net = tflearn.fully_connected(net, len(output[0]), activation="softmax")
net = tflearn.regression(net)

#build the model. DNN is tflearn's deep neural network model wrapper
model = tflearn.DNN(net)

# try:
#     model.load("model.tflearn")
# except:

#pass the model our training data. n_epoch is the number of times it will see the same data
model.fit(training, output, n_epoch=10000, batch_size=8, show_metric=True)
model.save("model.tflearn")

# function takes a sentence s and the vocabulary list of words
def bag_of_words(s, words):
    #create blank bag of words list
    bag = [0 for _ in range(len(words))]
    #get list of tokenized words and stem them
    s_words = nltk.word_tokenize(s)
    s_words = [stemmer.stem(word.lower()) for word in s_words]

    for se in s_words:
        for i, w in enumerate(words):
            #if current word in words list is equal to word in sentence
            if w == se:
                #add word to say it exists
                bag[i] = 1

    return numpy.array(bag)
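# e.g. bag_of_words("Are you open today?", words) returns a numpy array with
# one 0/1 entry per vocabulary word, ready for model.predict() to score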

def chat():
    print("Start talking with the bot! (type quit to stop)")
    while True:
        inp = input("You: ")
        if inp.lower() == "quit":
            break

        #now we turn the input into a bag of words, feed it to the model, and get the model's response
        results = model.predict([bag_of_words(inp, words)])
        #the output above is just a list of probabilities
        #get the index of the greatest probability in the list:
        results_index = numpy.argmax(results)
        #get the label it thinks our message is:
        tag = labels[results_index]

        #loop through the intent dictionaries to find the one with the matching tag
        for tg in data["intents"]:
            #found the corresponding tag:
            if tg['tag'] == tag:
                #define responses as the responses in the associated tag
                responses = tg['responses']

        print(random.choice(responses))

chat()
Here is the data it is learning from:

{"intents": [
        {"tag": "greeting",
         "patterns": ["Hi", "How are you", "Is anyone there?", "Hello", "Good day"],
         "responses": ["Hello, thanks for visiting", "Good to see you again", "Hi there, how can I help?"],
         "context_set": ""
        },
        {"tag": "goodbye",
         "patterns": ["Bye", "See you later", "Goodbye"],
         "responses": ["See you later, thanks for visiting", "Have a nice day", "Bye! Come back again soon."]
        },
        {"tag": "thanks",
         "patterns": ["Thanks", "Thank you", "That's helpful"],
         "responses": ["Happy to help!", "Any time!", "My pleasure"]
        },
        {"tag": "hours",
         "patterns": ["What hours are you open?", "What are your hours?", "When are you open?" ],
         "responses": ["We're open every day 9am-9pm", "Our hours are 9am-9pm every day"]
        },
        {"tag": "payments",
         "patterns": ["Do you take credit cards?", "Do you accept Mastercard?", "Are you cash only?" ],
         "responses": ["We accept VISA, Mastercard and AMEX", "We accept most major credit cards"]
        },
        {"tag": "opentoday",
         "patterns": ["Are you open today?", "When do you open today?", "What are your hours today?"],
         "responses": ["We're open every day from 9am-9pm", "Our hours are 9am-9pm every day"]
        }
   ]
}
Basically, no matter what I type, it always thinks I am asking for the store's opening hours. I have also tried training for different lengths of time, but nothing has any effect. Hopefully someone can tell me why it isn't working.


Thanks.

You should add a link to the tutorial. What exactly is not working? Do you get an error message? Where is the new data file? Did you train the model again with the new data? It may be loading the old model if one already exists; maybe delete the files with the saved model. Did you check that the code loads the correct file? Maybe it still uses the same old files; you can use print() to check this. You can also use print() to see which lines of code run and what the variables contain. Maybe it no longer trains, or it uses old values. You simply have to debug it, and to start with you can do that with print(). This is called "print debugging".
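A minimal sketch of those checks, assuming the variable and file names from the question's script (docs_x, labels, training, output, data.pickle, model.tflearn); the deletions would go at the top of the script, and the prints right after the preprocessing step:

import glob
import os

# Remove the cached preprocessing and model files so everything is rebuilt
# from the current intents.json. A TensorFlow checkpoint is split across
# several files, hence the glob pattern.
for path in glob.glob("model.tflearn*") + ["data.pickle", "checkpoint"]:
    if os.path.exists(path):
        os.remove(path)

# Right after preprocessing, confirm the model really sees all of the data.
# If training holds only one row, the lines that build output_row and append
# to training/output are not running once per pattern.
print("patterns collected:", len(docs_x))
print("labels:", labels)
print("training shape:", training.shape)  # expect (number of patterns, vocabulary size)
print("output shape:", output.shape)      # expect (number of patterns, number of labels)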