How do I load sentences into Python gensim?
I am trying to use the word2vec module from the gensim natural language processing library in Python. The documentation says to initialize the model like this:

from gensim.models import word2vec
model = word2vec.Word2Vec(sentences, size=100, window=5, min_count=5, workers=4)

What format does gensim expect for the input sentences? I have raw text:

"the quick brown fox jumps over the lazy dogs"
"Then a cop quizzed Mick Jagger's ex-wives briefly."
etc.

What additional processing do I need to do before passing the text to word2vec?
Update: Here is what I tried. When it loads the sentences, I get nothing:
>>> sentences = ['the quick brown fox jumps over the lazy dogs',
"Then a cop quizzed Mick Jagger's ex-wives briefly."]
>>> x = word2vec.Word2Vec()
>>> x.build_vocab([s.encode('utf-8').split() for s in sentences])
>>> x.vocab
{}
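The empty vocabulary above is explained by word2vec's default min_count=5: every word in the two sample sentences occurs fewer than five times, so all of them get pruned. A quick pure-Python check of that pruning rule (a sketch of the idea; no gensim required):

```python
from collections import Counter

sentences = ["the quick brown fox jumps over the lazy dogs",
             "Then a cop quizzed Mick Jagger's ex-wives briefly."]

# Tokenize the same way as the snippet above: a simple whitespace split.
counts = Counter(w for s in sentences for w in s.split())

# With the default min_count=5, a word must occur at least 5 times
# to survive vocabulary pruning.
surviving = {w: c for w, c in counts.items() if c >= 5}
print(surviving)  # {} -- even "the" only occurs twice, so nothing survives
```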
Make sure the sentences are utf-8 encoded, then split them:
sentences = [ "the quick brown fox jumps over the lazy dogs",
"Then a cop quizzed Mick Jagger's ex-wives briefly." ]
word2vec.Word2Vec([s.encode('utf-8').split() for s in sentences], size=100, window=5, min_count=5, workers=4)
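One caveat if you are on Python 3 rather than Python 2: calling .encode('utf-8') first produces bytes tokens, which is not what you want — the plain str tokens from s.split() are the right input there. A small sketch of the difference:

```python
s = "the quick brown fox"

# In Python 3, .encode('utf-8') yields a bytes object, so splitting it
# produces bytes tokens rather than str tokens.
print(s.encode('utf-8').split())  # [b'the', b'quick', b'brown', b'fox']
print(s.split())                  # ['the', 'quick', 'brown', 'fox']
```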
As alKid pointed out, encode it as utf-8. Here are two additional things you may have to worry about: removing stop words, and streaming the sentences from a file on disk instead of holding them all in memory:
import nltk, gensim

# Stream sentences from a file: one sentence per line, lowercased,
# with English stop words removed (Python 2 code, hence unicode()).
class FileToSent(object):
    def __init__(self, filename):
        self.filename = filename
        self.stop = set(nltk.corpus.stopwords.words('english'))
    def __iter__(self):
        for line in open(self.filename, 'r'):
            yield [i for i in unicode(line, 'utf-8').lower().split() if i not in self.stop]
And then:
sentences = FileToSent('sentence_file.txt')
model = gensim.models.Word2Vec(sentences=sentences, window=5, min_count=5, workers=4, hs=1)
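If you are on Python 3 (the class above is Python 2), the same streaming idea can be sketched as follows. The stop-word set is hard-coded so the example runs without NLTK, and the demo file is a temporary file rather than a real corpus:

```python
import os
import tempfile

# Tiny hard-coded stop-word list standing in for nltk's English stop words.
STOP = {"the", "a", "over"}

class SentenceStream:
    """Yield one tokenized, lowercased, stop-word-filtered sentence per line."""
    def __init__(self, filename):
        self.filename = filename

    def __iter__(self):
        with open(self.filename, encoding="utf-8") as f:
            for line in f:
                yield [w for w in line.lower().split() if w not in STOP]

# Demo corpus: one sentence per line, written to a temporary file.
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                  encoding="utf-8")
tmp.write("The quick brown fox jumps over the lazy dogs\n")
tmp.close()

result = list(SentenceStream(tmp.name))
print(result)  # [['quick', 'brown', 'fox', 'jumps', 'lazy', 'dogs']]
os.unlink(tmp.name)
```

Because __iter__ opens the file afresh on every pass, the same object can be handed to Word2Vec, which iterates the corpus once to build the vocabulary and again for each training epoch.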
Actually, sentences must be a list of words, not a string, i.e.
s.encode('utf-8').split()
Whoops, sorry. Updated. Thanks!
RuntimeError: you must first build vocabulary before training the model
Enable logging and watch what it says. That's your answer. Spoiler: min_count=5.
@alKid Nice answer, but sentences is a sequence of sentences (an iterable), not necessarily a list. This makes a big difference when the sentences don't fit in RAM (i.e., when streaming from disk).
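The iterable-vs-list distinction matters because gensim walks the corpus several times: once to build the vocabulary, then once per training epoch. A one-shot generator is exhausted after the first pass, while an object with an __iter__ method restarts each time — which is why the FileToSent class above works. A sketch of the difference:

```python
def one_shot():
    # A plain generator: usable for a single pass only.
    for s in (["hello", "world"], ["foo", "bar"]):
        yield s

g = one_shot()
first = len(list(g))   # 2
second = len(list(g))  # 0 -- the generator is exhausted

class Corpus:
    # An object whose __iter__ returns a fresh generator on every pass.
    def __iter__(self):
        for s in (["hello", "world"], ["foo", "bar"]):
            yield s

c = Corpus()
print(len(list(c)), len(list(c)))  # 2 2 -- restartable
```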