
Python: Is an accuracy of 0.7-0.75 acceptable for Naive Bayes sentiment analysis?


Apologies in advance for posting so much code.

I am trying to classify YouTube comments into those that contain an opinion (whether positive or negative) and those that do not, using NLTK's Naive Bayes classifier, but no matter what I do in the preprocessing stage I can't really get the accuracy above 0.75. That seems low compared with other examples I have seen; for instance, the tutorial I was following ends up with an accuracy of around 0.98.

Here is my full code:

import nltk, re, json, random

from nltk.stem.wordnet import WordNetLemmatizer
from nltk.corpus import stopwords
from nltk.tag import pos_tag
from nltk.tokenize import TweetTokenizer
from nltk import FreqDist, classify, NaiveBayesClassifier

from contractions import CONTRACTION_MAP
from abbreviations import abbrev_map
from tqdm.notebook import tqdm

def expand_contractions(text, contraction_mapping=CONTRACTION_MAP):
    text = re.sub(r"’", "'", text)
    if text in abbrev_map:
        return(abbrev_map[text])
    text = re.sub(r"\bluv", "lov", text)
    
    contractions_pattern = re.compile('({})'.format('|'.join(contraction_mapping.keys())), 
                                      flags=re.IGNORECASE|re.DOTALL)
    def expand_match(contraction):
        match = contraction.group(0)
        first_char = match[0]
        expanded_contraction = (contraction_mapping.get(match)
                                or contraction_mapping.get(match.lower()))
        expanded_contraction = first_char + expanded_contraction[1:]
        return expanded_contraction
        
    expanded_text = contractions_pattern.sub(expand_match, text)
    return expanded_text

def reduce_lengthening(text):
    pattern = re.compile(r"(.)\1{2,}")
    return pattern.sub(r"\1\1", text)

def lemmatize_sentence(tokens):
    lemmatizer = WordNetLemmatizer()
    lemmatized_sentence = []
    for word, tag in pos_tag(tokens):
        if tag.startswith('NN'):
            pos = 'n'
        elif tag.startswith('VB'):
            pos = 'v'
        else:
            pos = 'a'
        lemmatized_sentence.append(lemmatizer.lemmatize(word, pos))
    return lemmatized_sentence

def processor(comments_list):
    
    new_comments_list = []
    for com in tqdm(comments_list):
        com = com.lower()
        
        #expand out contractions
        tok = com.split(" ")
        z = []
        for w in tok:
            ex_w = expand_contractions(w)
            z.append(ex_w)
        st = " ".join(z)
        
        
        tokenized = tokenizer.tokenize(st)
        reduced = [reduce_lengthening(token) for token in tokenized]
        new_comments_list.append(reduced)
        
    lemmatized = [lemmatize_sentence(new_com) for new_com in new_comments_list]
    
    return(lemmatized)

def get_all_words(cleaned_tokens_list):
    for tokens in cleaned_tokens_list:
        for token in tokens:
            yield token

def get_comments_for_model(cleaned_tokens_list):
    for comment_tokens in cleaned_tokens_list:
        yield dict([token, True] for token in comment_tokens)
        
if __name__ == "__main__":
    #=================================================================================~
    tokenizer = TweetTokenizer(strip_handles=True, reduce_len=True)        
    
    with open ("english_lang/samples/training_set.json", "r", encoding="utf8") as f:
        train_data = json.load(f)
        
    pos_processed = processor(train_data['pos'])
    neg_processed = processor(train_data['neg'])
    neu_processed = processor(train_data['neu'])
    
    emotion = pos_processed + neg_processed
    random.shuffle(emotion)
    
    em_tokens_for_model = get_comments_for_model(emotion)
    neu_tokens_for_model = get_comments_for_model(neu_processed)

    em_dataset = [(comment_dict, "Emotion")
                         for comment_dict in em_tokens_for_model]

    neu_dataset = [(comment_dict, "Neutral")
                             for comment_dict in neu_tokens_for_model]

    dataset = em_dataset + neu_dataset


    random.shuffle(dataset)
    x = 700
    tr_data = dataset[:x]
    te_data = dataset[x:]
    classifier = NaiveBayesClassifier.train(tr_data)
    print(classify.accuracy(classifier, te_data))
I can post my training dataset if needed, but it is probably worth mentioning that the English in the YouTube comments themselves is very poor and inconsistent (which I suspect is the reason for the low model accuracy). In any case, would this be considered an acceptable level of accuracy? Alternatively, I may have gone about this completely the wrong way and there is a much better model to use, in which case feel free to tell me I'm an idiot!
Thanks in advance.

Comparing your results against those of an unrelated tutorial is not statistically meaningful. Before you panic, do some proper research into the factors that can lower a model's accuracy. First and foremost, your model cannot achieve higher accuracy than is inherent in the information content of the dataset. For example, no model (in the long run) can do better than 50% at predicting random binary events, regardless of the dataset.
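As a rough sanity check of that floor, you can compare the classifier against a trivial majority-class baseline on the same held-out split; if 0.7-0.75 only barely beats that baseline, the model is learning very little. This is only a sketch, and it assumes the tr_data and te_data lists of (feature_dict, label) pairs built in the question's __main__ block:

from collections import Counter

def majority_baseline_accuracy(train_data, test_data):
    # Predict the most frequent training label for every test comment.
    most_common_label, _ = Counter(label for _, label in train_data).most_common(1)[0]
    correct = sum(1 for _, label in test_data if label == most_common_label)
    return correct / len(test_data)

# e.g. print(majority_baseline_accuracy(tr_data, te_data))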


We have no reasonable way to evaluate the theoretical information content of your data here. If you want to check, try applying some other model types to the same data and see what accuracy they produce. Running experiments like these is a normal part of data science.
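One way to run that comparison, assuming scikit-learn is installed, is to wrap a couple of scikit-learn estimators with NLTK's SklearnClassifier, which accepts the same {token: True} feature dicts and the same tr_data / te_data split used for the Naive Bayes model. This is a sketch of the experiment, not a recommendation of any particular model:

from nltk.classify import SklearnClassifier, accuracy
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

# Train each wrapped estimator on the same labelled feature dicts and report
# its accuracy on the same held-out data as the Naive Bayes classifier.
for name, estimator in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                        ("LinearSVC", LinearSVC())]:
    model = SklearnClassifier(estimator).train(tr_data)
    print(name, accuracy(model, te_data))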

This question may find better answers elsewhere online, so keep that in mind if you don't get one here. OK, thank you for your answer.