Python: BaseCollectiveExecutor::StartAbort Out of range: End of sequence

I'm new to Python, so I've been following a chatbot tutorial (not the greatest one, judging by the number of errors I've had to deal with and how many other people seem to be fighting it on this site). It uses tflearn and, after a lot of troubleshooting, it worked fine at first, but I wanted to update tensorflow and everything else to the latest versions, so after getting more help here I switched it over to keras. After some more troubleshooting it works, but a warning appears between my input and the bot's output. I managed to silence one of the warnings with tf.compat.v1.logging set_verbosity, but the other one won't go away:

2020-03-23 09:43:55.050983: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
	 [[{{node IteratorGetNext}}]]
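As far as I can tell, the set_verbosity call only touches TensorFlow's Python-side logger, while this message is printed by the C++ runtime, which may be why it won't go away. I've seen the TF_CPP_MIN_LOG_LEVEL environment variable suggested for C++-level messages, roughly like the sketch below, but I'd rather understand what's actually causing the warning than just hide it:

import os

# Hide C++-side INFO and WARNING messages ("3" would also hide errors).
# This has to be set before tensorflow is imported anywhere.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import tensorflow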

My code:

import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()

import numpy
import tensorflow
tensorflow.compat.v1.logging.set_verbosity(tensorflow.compat.v1.logging.ERROR)
import tensorflow.keras
import random
import json
import pickle

with open("C:\\Users\\School\\Documents\\python\\AI\\VIVA.json") as file:
    data = json.load(file)

words = []
labels = []
docs_x = []
docs_y = []

for intent in data["intents"]:
    for pattern in intent["patterns"]:
        wrds = nltk.word_tokenize(pattern)
        words.extend(wrds)
        docs_x.append(wrds)
        docs_y.append(intent["tag"])

        if intent["tag"] not in labels:
            labels.append(intent["tag"])

words = [stemmer.stem(w.lower()) for w in words if w != "?"]
words = sorted(list(set(words)))
labels = sorted(labels)

training = []
output = []

out_empty = [0 for _ in range(len(labels))]

for x, doc in enumerate(docs_x):
    bag = []

    wrds = [stemmer.stem(w) for w in doc]

    for w in words:
        if w in wrds:
            bag.append(1)
        else:
            bag.append(0)
    output_row = out_empty[:]
    output_row[labels.index(docs_y[x])] = 1

    training.append(bag)
    output.append(output_row)

training = numpy.array(training)
output = numpy.array(output)

model = tensorflow.keras.Sequential([
    tensorflow.keras.layers.Dense(8, input_shape=(len(training[0]),)),
    tensorflow.keras.layers.Dense(8),
    tensorflow.keras.layers.Dense(len(output[0]), activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(training, output, epochs=150, batch_size=None)

def bag_of_words(s, words):
    bag = [0 for _ in range(len(words))]

    s_words = nltk.word_tokenize(s)
    s_words = [stemmer.stem(word.lower()) for word in s_words]

    for se in s_words:
        for i, w in enumerate(words):
            if w == se:
                bag[i] = 1

    return numpy.array(bag)

def chat():
    print("Ready! Type quit to leave.")
    while True:
        inp = input(">>>")
        if inp.lower() == "quit":
            break

        results = model.predict([[bag_of_words(inp, words)]])
        results_index = numpy.argmax(results)
        tag = labels[results_index]

        for tg in data["intents"]:
            if tg['tag'] == tag:
                responses = tg['responses']

        print(random.choice(responses))

chat()
The JSON file is a dictionary in this format:

{"intents": [
        {"tag": "greeting",
         "patterns": ["hi", "hey", "is anyone there?", "hello", "good day", "whats up", "sup", "Viv", "VIVA"],
         "responses": ["Hello!", "Hi!", "Hey!"],
         "context_set": ""
       }
  ]
}

I know there are plenty of answers about this error on here, but none of the solutions match what I'm doing. I've tried everything I've seen, but since I'm not using tf.data or anything like that, none of it worked. Sorry if I've gotten anything wrong with the formatting or anything else; I only made this account to ask this question, because I'm a pretty desperate high-school student and this is eating up a lot of the time when I should really be in class.
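The closest suggestion I've come across is to hand fit() an explicitly repeated tf.data.Dataset together with steps_per_epoch, so the iterator never reports end of sequence partway through training. I haven't confirmed whether that applies to my case, since I'm only passing plain numpy arrays, but roughly it would look like this (the batch size is an arbitrary choice for the sketch):

import tensorflow as tf

batch_size = 8  # arbitrary choice for this sketch

# Wrap the numpy arrays in a dataset that repeats indefinitely,
# then tell Keras how many batches make up one epoch.
dataset = tf.data.Dataset.from_tensor_slices((training, output))
dataset = dataset.shuffle(len(training)).batch(batch_size).repeat()

model.fit(dataset, epochs=150, steps_per_epoch=max(1, len(training) // batch_size))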

Did you ever find out what was causing this?