Python 2.7: Using my own corpus instead of the movie_reviews corpus for classification in NLTK


I am using the code below and get output of the following form.

Output:

0.655
Most Informative Features
                 bad = True              neg : pos    =      2.0 : 1.0
              script = True              neg : pos    =      1.5 : 1.0
               world = True              pos : neg    =      1.5 : 1.0
             nothing = True              neg : pos    =      1.5 : 1.0
                 bad = False             pos : neg    =      1.5 : 1.0

I want to create my own folder in NLTK instead of movie_reviews, and put my own files into it.

If your data is structured exactly like the movie_reviews corpus in NLTK, there are two ways to "hack" your way through:

1. Put your corpus directory into the location where nltk.data is saved

First, check where your nltk.data is saved:

>>> import nltk
>>> nltk.data.find('corpora/movie_reviews')
FileSystemPathPointer(u'/home/alvas/nltk_data/corpora/movie_reviews')
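If that lookup raises a LookupError instead, you can inspect nltk.data.path, the list of directories NLTK searches in order, and drop your corpus into any of them. A quick sketch (the paths printed will vary from machine to machine):

>>> import nltk
>>> nltk.data.path   # directories NLTK searches, in order; entries vary per machine
['/home/alvas/nltk_data', '/usr/share/nltk_data', '/usr/local/share/nltk_data', ...]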
Then move your directory to that location, under nltk_data/corpora:

# Let's make a test corpus like `nltk.corpus.movie_reviews`
~$ mkdir my_movie_reviews
~$ mkdir my_movie_reviews/pos
~$ mkdir my_movie_reviews/neg
~$ echo "This is a great restaurant." > my_movie_reviews/pos/1.txt
~$ echo "Had a great time at chez jerome." > my_movie_reviews/pos/2.txt
~$ echo "Food fit for the ****" > my_movie_reviews/neg/1.txt
~$ echo "Slow service." > my_movie_reviews/neg/2.txt
~$ echo "README please" > my_movie_reviews/README
# Move it to `nltk_data/corpora/`
~$ mv my_movie_reviews/ nltk_data/corpora/
In your Python code:

>>> import string
>>> from nltk.corpus import LazyCorpusLoader, CategorizedPlaintextCorpusReader
>>> from nltk.corpus import stopwords
>>> my_movie_reviews = LazyCorpusLoader('my_movie_reviews', CategorizedPlaintextCorpusReader, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
>>> mr = my_movie_reviews
>>>
>>> stop = stopwords.words('english')
>>> documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
>>> for i in documents:
...     print i
... 
([u'Food', u'fit', u'****'], u'neg')
([u'Slow', u'service'], u'neg')
([u'great', u'restaurant'], u'pos')
([u'great', u'time', u'chez', u'jerome'], u'pos')
(For more details, see and )
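As a quick sanity check (continuing the same session; the output shown is for the toy corpus built above), the categorized reader also exposes its labels and per-category file lists:

>>> mr.categories()
['neg', 'pos']
>>> mr.fileids('pos')
['pos/1.txt', 'pos/2.txt']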

2. Create your own CategorizedPlaintextCorpusReader

If you cannot access the nltk.data directory and want to use your own corpus, try this:

# Let's say that your corpus is saved on `/home/alvas/my_movie_reviews/`

>>> import string; from nltk.corpus import stopwords
>>> from nltk.corpus import CategorizedPlaintextCorpusReader
>>> mr = CategorizedPlaintextCorpusReader('/home/alvas/my_movie_reviews', r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
>>> stop = stopwords.words('english')
>>> documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
>>> 
>>> for doc in documents:
...     print doc
... 
([u'Food', u'fit', u'****'], 'neg')
([u'Slow', u'service'], 'neg')
([u'great', u'restaurant'], 'pos')
([u'great', u'time', u'chez', u'jerome'], 'pos')

Similar questions have already been asked on and


Here is the complete working code:

import string
from itertools import chain

from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier as nbc
from nltk.corpus import CategorizedPlaintextCorpusReader
import nltk

mydir = '/home/alvas/my_movie_reviews'

mr = CategorizedPlaintextCorpusReader(mydir, r'(?!\.).*\.txt', cat_pattern=r'(neg|pos)/.*', encoding='ascii')
stop = stopwords.words('english')
documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]

word_features = FreqDist(chain(*[i for i,j in documents]))
# NB: in the older NLTK releases this answer targets, FreqDist.keys() is
# frequency-sorted, so this keeps the 100 most frequent words.
word_features = word_features.keys()[:100]

numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag  in documents[numtrain:]]

classifier = nbc.train(train_set)
print nltk.classify.accuracy(classifier, test_set)
classifier.show_most_informative_features(5)
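If you then want to label a new, unseen document, something the comments below touch on, here is a minimal sketch reusing the names from the code above (the input sentence is a made-up example):

# Classify a new, unseen sentence (hypothetical input text)
test_sentence = "The food was bad and the service was slow."
test_tokens = [w.lower() for w in test_sentence.split() if w.lower() not in stop]
featureset = {i: (i in test_tokens) for i in word_features}
print classifier.classify(featureset)  # prints 'neg' or 'pos'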

What does your folder look like? Could you post a snippet of the files in it, or a link to your dataset?

It is structured exactly like the movie_reviews folder and has pos and neg subfolders, but I chose the contents of the .txt files myself.

Hope the answer helps.

@alvas Yes, it helped, thank you. Could you answer this question as well:

Just add a few more lines after that =). Use exactly the same code after the documents = … line.

Could you explain what you mean? I do not understand.

To get the most informative features, use classifier.show_most_informative_features(5).

Thank you very much, it worked. Now I am trying to understand the code ;)

@alvas I get an error on the line word_features = word_features.keys()[:100]; it says TypeError: 'dict_keys' object is not subscriptable. What could be the reason?