CountVectorizer vocabulary specification for bigrams (Python)

I am trying to obtain a sparse matrix of term counts for a large number (~160,000) of documents.

I have cleaned the text and want to loop over all documents (i.e., count-vectorize one document at a time and append the resulting 1xN array). The following code works for the word-by-word case but not for bigrams:

import numpy as np
import nltk
from nltk.util import ngrams
import sklearn.feature_extraction.text

cv1 = sklearn.feature_extraction.text.CountVectorizer(stop_words=None, vocabulary=dictionary1)
cv2 = sklearn.feature_extraction.text.CountVectorizer(stop_words=None, vocabulary=dictionary2)

for row in range(start,end+1):
    report_name = fund_reports_table.loc[row, "report_names"]
    raw_report = open("F:/EDGAR_ShareholderReports/" + report_name, 'r', encoding="utf8").read()

    ## word for word
    temp = cv1.fit_transform([raw_report]).toarray()
    res1 = np.concatenate((res1,temp),axis=0)

    ## big grams
    bigram=set()
    sentences = raw_report.split(".")
    for line in sentences:
        token = nltk.word_tokenize(line)
        bigram = bigram.union(set(list(ngrams(token, 2)))  )

    temp = cv2.fit_transform(list(bigram)).toarray()
    res2=np.concatenate((res2,temp),axis=0)
Python returns

"AttributeError: 'tuple' object has no attribute 'lower'" 
presumably because the way I am feeding data into the bigram CountVectorizer is invalid.

raw_report is a string. The word-by-word dictionary is:

dictionary1 =['word1', 'words2',...]
dictionary2 is similar, but built from the bigrams pooled across all documents (keeping only unique values, done in an earlier step), so the resulting structure is

dictionary2 =[('word1','word2'),('wordn','wordm'),...]
The document bigrams have the same structure, which is why I am puzzled that Python does not accept the input. Is there a way to fix this, or is my whole approach not very Pythonic and starting to backfire?

Thanks in advance for any help.


Remark: I know I could do the whole process with a more elaborate CountVectorizer command (i.e., cleaning, tokenizing and counting in one step), but I would prefer to do it step by step myself (to inspect and store the intermediate outputs). Also, since I am working with a lot of text, I am afraid of running into memory issues.
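For reference, the one-step variant would look roughly like the sketch below, using CountVectorizer's input='filename' mode so that reading, tokenizing and counting happen in one call (the names file_paths, cv_onestep and res1_sparse are only illustrative):

# Illustrative sketch of the one-step alternative: CountVectorizer reads,
# tokenizes and counts the report files itself and returns a sparse matrix
# with one row per document.
from sklearn.feature_extraction.text import CountVectorizer

file_paths = ["F:/EDGAR_ShareholderReports/" + name
              for name in fund_reports_table["report_names"]]
cv_onestep = CountVectorizer(input='filename', encoding='utf-8',
                             stop_words=None, vocabulary=dictionary1)
res1_sparse = cv_onestep.fit_transform(file_paths)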

Your problem comes from the fact that your dictionary2 is based on tuples. Below is a minimal example showing that this approach works when the bigrams are strings. If you want to process each file separately, you can pass it to vectorizer.transform() as a list.

from sklearn.feature_extraction.text import CountVectorizer

Doc1 = 'Wimbledon is one of the four Grand Slam tennis tournaments, the others being the Australian Open, the French Open and the US Open.'
Doc2 = 'Since the Australian Open shifted to hardcourt in 1988, Wimbledon is the only major still played on grass'
doc_set = [Doc1, Doc2]

my_vocabulary= ['Grand Slam', 'Australian Open', 'French Open', 'US Open']

vectorizer = CountVectorizer(ngram_range=(2, 2))
vectorizer.fit_transform(my_vocabulary)
term_count = vectorizer.transform(doc_set)

# Show the index key for each bigram
vectorizer.vocabulary_
Out[11]: {'grand slam': 2, 'australian open': 0, 'french open': 1, 'us open': 3}

# Sparse matrix of bigram counts - each row corresponds to a document
term_count.toarray()
Out[12]: 
array([[1, 1, 1, 1],
       [1, 0, 0, 0]], dtype=int64)
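As mentioned above, a single document can also be vectorized on its own by passing it to vectorizer.transform() as a one-element list (a minimal sketch reusing the vectorizer trained above):

# Transform one document at a time by wrapping it in a list;
# for Doc1 this gives [[1, 1, 1, 1]], i.e. the first row of the matrix above
single_count = vectorizer.transform([Doc1])
single_count.toarray()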
You can use a list comprehension to modify your dictionary2:

dictionary2 = [('Grand', 'Slam'), ('Australian', 'Open'), ('French', 'Open'), ('US', 'Open')]
dictionary2 = [' '.join(tup) for tup in dictionary2]

dictionary2
Out[26]: ['Grand Slam', 'Australian Open', 'French Open', 'US Open']
EDIT: Based on the above, I think you could use the following code:

import numpy as np
import nltk
from nltk.util import ngrams
from sklearn.feature_extraction.text import CountVectorizer

# Modify dictionary2 to be compatible with CountVectorizer
dictionary2_cv = [' '.join(tup) for tup in dictionary2]

# Initialize and train CountVectorizer
cv2 = CountVectorizer(ngram_range=(2, 2))
cv2.fit_transform(dictionary2_cv)

for row in range(start,end+1):
    report_name = fund_reports_table.loc[row, "report_names"]
    raw_report = open("F:/EDGAR_ShareholderReports/" + report_name, 'r', encoding="utf8").read()

    ## word for word
    temp = cv1.fit_transform([raw_report]).toarray()
    res1 = np.concatenate((res1,temp),axis=0)

    ## big grams
    bigram=set()
    sentences = raw_report.split(".")
    for line in sentences:
        token = nltk.word_tokenize(line)
        bigram = bigram.union(set(list(ngrams(token, 2)))  )

    # Modify bigram to be compatible with CountVectorizer
    bigram = [' '.join(tup) for tup in bigram]

    # Note you must not fit_transform here - only transform using the trained cv2
    temp = cv2.transform(list(bigram)).toarray()
    res2=np.concatenate((res2,temp),axis=0)
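Note that the snippet assumes res1 and res2 already exist before the loop, otherwise the np.concatenate calls fail on the first iteration. A minimal sketch of an initialization that would fit (the empty-array shapes are an assumption, not part of the code above):

import numpy as np

# Assumed initialization: zero rows, one column per vocabulary entry,
# so each iteration can append a 1xN row via np.concatenate
res1 = np.empty((0, len(dictionary1)), dtype=int)
res2 = np.empty((0, len(cv2.vocabulary_)), dtype=int)

# Given the ~160,000 documents mentioned in the question, collecting the sparse
# outputs of transform() in a list and stacking them once with scipy.sparse.vstack
# (instead of calling .toarray() and concatenating dense rows) would also be
# easier on memory.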

I now understand the nature of the problem better, but the issue remains: given the large number of documents, adding them all to a single doc_set is not feasible. Also, I would prefer to build the bigrams and the resulting vocabulary "manually". I do not see what form the inputs have to take (the vocabulary passed to CountVectorizer and the input passed to fit_transform()) for the code to work.

@SAFEX I have edited my answer based on your code and provided a possible solution. Note that you must not fit_transform after your CountVectorizer has been trained.