Python NLTK issue: name 'save_file' is not defined


How can I solve this problem?

from nltk.sentiment.util import demo_sent_subjectivity
sentence='I like her shoes'
demo_sent_subjectivity(sentence)
NameError: name 'save_file' is not defined

demo_sent_subjectivity(text) is a function in nltk.sentiment.util, and it in turn calls another function, demo_subjectivity(trainer, save_analyzer=False, n_instances=None, output=None).

These functions look like this:

def demo_sent_subjectivity(text):
    """
    Classify a single sentence as subjective or objective using a stored
    SentimentAnalyzer.

    :param text: a sentence whose subjectivity has to be classified.
    """
    from nltk.classify import NaiveBayesClassifier
    from nltk.tokenize import regexp

    word_tokenizer = regexp.WhitespaceTokenizer()
    try:
        sentim_analyzer = load("sa_subjectivity.pickle")
    except LookupError:
        print("Cannot find the sentiment analyzer you want to load.")
        print("Training a new one using NaiveBayesClassifier.")
        sentim_analyzer = demo_subjectivity(NaiveBayesClassifier.train, True)

    # Tokenize and convert to lower case
    tokenized_text = [word.lower() for word in word_tokenizer.tokenize(text)]
    print(sentim_analyzer.classify(tokenized_text))


def demo_subjectivity(trainer, save_analyzer=False, n_instances=None, output=None):
    from nltk.sentiment import SentimentAnalyzer
    from nltk.corpus import subjectivity

    if n_instances is not None:
        n_instances = int(n_instances / 2)

    subj_docs = [
        (sent, "subj")
        for sent in subjectivity.sents(categories="subj")[:n_instances]
    ]
    obj_docs = [
        (sent, "obj")
        for sent in subjectivity.sents(categories="obj")[:n_instances]
    ]

    train_subj_docs, test_subj_docs = split_train_test(subj_docs)
    train_obj_docs, test_obj_docs = split_train_test(obj_docs)

    training_docs = train_subj_docs + train_obj_docs
    testing_docs = test_subj_docs + test_obj_docs

    sentim_analyzer = SentimentAnalyzer()
    all_words_neg = sentim_analyzer.all_words(
        [mark_negation(doc) for doc in training_docs]
    )

    unigram_feats = sentim_analyzer.unigram_word_feats(all_words_neg, min_freq=4)
    sentim_analyzer.add_feat_extractor(extract_unigram_feats, unigrams=unigram_feats)

    training_set = sentim_analyzer.apply_features(training_docs)
    test_set = sentim_analyzer.apply_features(testing_docs)

    classifier = sentim_analyzer.train(trainer, training_set)
    try:
        classifier.show_most_informative_features()
    except AttributeError:
        print(
            "Your classifier does not provide a "
            "show_most_informative_features() method."
        )
    results = sentim_analyzer.evaluate(test_set)

    if save_analyzer == True:
        save_file(sentim_analyzer, "sa_subjectivity.pickle")

    if output:
        extr = [f.__name__ for f in sentim_analyzer.feat_extractors]
        output_markdown(
            output,
            Dataset="subjectivity",
            Classifier=type(classifier).__name__,
            Tokenizer="WhitespaceTokenizer",
            Feats=extr,
            Instances=n_instances,
            Results=results,
        )

    return sentim_analyzer

save_file is called inside the demo_ functions, but I don't understand where its source code is. I noticed that a save_file method exists in the SentimentAnalyzer class, but why is it called here simply as save_file rather than as sentim_analyzer.save_file?
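(A side note on the unqualified call: inside a Python module, a bare name such as save_file is looked up in that module's own global namespace, so a helper defined at the top level of nltk.sentiment.util can be called without any sentim_analyzer. prefix. A minimal sketch of that kind of layout, using hypothetical names rather than NLTK's actual source:)

# mymodule.py -- hypothetical module, for illustration only
import pickle

def save_file(content, filename):
    # Module-level helper: pickle `content` to `filename`.
    with open(filename, "wb") as storage:
        pickle.dump(content, storage)

def demo(analyzer):
    # Inside the same module, the bare name `save_file` resolves to the
    # module-level function above -- no instance prefix is needed.
    save_file(analyzer, "analyzer.pickle")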

A NameError simply means that you are trying to use a variable that has not been defined. Since save_file does not appear in the code you provided, it is hard to work out exactly how you got this error. Please include the exact code that triggers the error and the full traceback.
Thank you very much for your edit! I have added the source code of these methods.
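For completeness, a minimal workaround sketch based only on the code quoted above. It assumes demo_subjectivity can be imported from nltk.sentiment.util (just as demo_sent_subjectivity is) and that the subjectivity corpus is available. Passing save_analyzer=False means the save_file call is never reached; the trained analyzer is then pickled with the standard library and used to classify the sentence the same way demo_sent_subjectivity does:

import pickle

from nltk.classify import NaiveBayesClassifier
from nltk.sentiment.util import demo_subjectivity
from nltk.tokenize import regexp

# Train without asking demo_subjectivity to save, which skips save_file entirely.
# The subjectivity corpus must be available (e.g. via nltk.download('subjectivity')).
sentim_analyzer = demo_subjectivity(NaiveBayesClassifier.train, save_analyzer=False)

# Persist the trained analyzer ourselves so later runs can reload it instead of retraining.
with open("sa_subjectivity.pickle", "wb") as storage:
    pickle.dump(sentim_analyzer, storage)

# Classify a sentence the same way demo_sent_subjectivity does.
sentence = "I like her shoes"
word_tokenizer = regexp.WhitespaceTokenizer()
tokens = [word.lower() for word in word_tokenizer.tokenize(sentence)]
print(sentim_analyzer.classify(tokens))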