How to use n-grams in Whoosh (Python) for autocomplete-style search


I'm trying to use n-grams to get an "autocomplete-style" search using Whoosh. Unfortunately, I'm a little confused. I made an index like this:

import os
from whoosh.index import create_in, open_dir

if not os.path.exists("index"):
    os.mkdir("index")
ix = create_in("index", schema)

ix = open_dir("index")

writer = ix.writer()
q = MyTable.select()
for item in q:
    print('adding %s' % item.Title)
    writer.add_document(title=item.Title, content=item.content, url=item.URL)
writer.commit()
Then I search the title field like this:

from whoosh.qparser import QueryParser

querystring = 'my search string'

parser = QueryParser("title", ix.schema)
myquery = parser.parse(querystring)

with ix.searcher() as searcher:
    results = searcher.search(myquery)
    print(len(results))

    for r in results:
        print(r)
This works great. But when I want to use it for autocomplete, it doesn't match partial words (e.g. searching for "ant" returns "ant", but not "antelope" or "anteater"). That, of course, largely defeats the purpose of using it for autocomplete. The Whoosh documentation says to use this:

from whoosh import analysis, fields

analyzer = analysis.NgramWordAnalyzer()
title_field = fields.TEXT(analyzer=analyzer, phrase=False)
schema = fields.Schema(title=title_field)

But I'm confused by that. It seems to cover only the "middle" of the process; when I build the index, do I have to declare the title field as an NGRAM field (rather than TEXT)? And how do I perform the search? So that when I search for "ant" I get ["ant", "anteater", "antelope"], etc.?
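For intuition, here is a plain-Python sketch (not Whoosh's actual implementation) of what an NGRAMWORDS-style field indexes: every character n-gram of each word within the configured size range. Because "ant" is one of the grams stored for "anteater", the short query can hit the longer word. The `at='start'` option keeps only grams anchored at the beginning of the word, which is what you want for prefix-style autocomplete:

```python
def word_ngrams(word, minsize=2, maxsize=10, at=None):
    """All character n-grams of `word` between minsize and maxsize.

    at='start' keeps only grams anchored at the beginning of the word,
    mirroring the at='start' option of Whoosh's NGRAMWORDS field.
    """
    grams = set()
    for size in range(minsize, maxsize + 1):
        starts = [0] if at == 'start' else range(len(word) - size + 1)
        for i in starts:
            if i + size <= len(word):
                grams.add(word[i:i + size])
    return grams

# "ant" is among the grams indexed for "anteater", so the query matches:
"ant" in word_ngrams("anteater", at='start')  # True
```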

I solved this by creating two separate fields: one for the actual search and one for the suggestions. The NGRAM or NGRAMWORDS field types can be used for "fuzzy search" functionality. In your case it would be something like this:

# not sure how your schema looks exactly
from whoosh.fields import Schema, NGRAMWORDS, TEXT, ID

schema = Schema(
    title=NGRAMWORDS(minsize=2, maxsize=10, stored=True, field_boost=1.0,
                     tokenizer=None, at='start', queryor=False, sortable=False),
    content=TEXT(stored=True),
    url=ID(stored=True),
    spelling=TEXT(stored=True, spelling=True))  # typeahead field

if not os.path.exists("index"):
    os.mkdir("index")
ix = create_in("index", schema)

ix = open_dir("index")

writer = ix.writer()
q = MyTable.select()
for item in q:
    print('adding %s' % item.Title)
    writer.add_document(title=item.Title, content=item.content, url=item.URL)
    writer.add_document(spelling=item.Title)  # add the item title to the typeahead field
    self.addContentToSpelling(writer, item.content)  # some method that adds content words to the typeahead field if needed, same as above
writer.commit()
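The `spelling` field above feeds Whoosh's corrector, which is queried at search time. As a rough stdlib-only analogy (not Whoosh's implementation), `corrector.suggest(word)` behaves like edit-distance matching of the typed word against the indexed vocabulary; the vocabulary below is hypothetical:

```python
import difflib

# Hypothetical vocabulary, standing in for terms indexed in the "spelling" field.
vocabulary = ["ant", "anteater", "antelope", "beetle"]

def suggest(word, limit=3):
    # Rough stand-in for ix.searcher().corrector("spelling").suggest(word):
    # rank vocabulary terms by string similarity to the typed word.
    return difflib.get_close_matches(word, vocabulary, n=limit, cutoff=0.4)
```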
Then, when it's time to search:

origQueryString = 'my search string'
words = self.splitQuery(origQueryString)  # use tokenizers / analyzers, or split it yourself
queryString = origQueryString  # would be better to actually build a query
corrector = ix.searcher().corrector("spelling")
for word in words:
    suggestionList = corrector.suggest(word, limit=self.limit)
    for suggestion in suggestionList:
        queryString = queryString + " " + suggestion  # would be better to actually build a query

parser = QueryParser("title", ix.schema)
myquery = parser.parse(queryString)

with ix.searcher() as searcher:
    results = searcher.search(myquery)
    print(len(results))

    for r in results:
        print(r)
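As the comments above note, concatenating raw strings is fragile; it would be cleaner to group each typed word with its suggestions into an OR expression before handing the string to QueryParser (which understands OR groups). A minimal stdlib-only sketch, with `suggestions_for` as a hypothetical stand-in for `corrector.suggest`:

```python
def build_query_string(words, suggestions_for):
    """Combine each typed word with its suggestions into one OR group."""
    seen, terms = set(), []
    for word in words:
        for term in [word] + list(suggestions_for(word)):
            if term not in seen:  # deduplicate, preserving order
                seen.add(term)
                terms.append(term)
    return "(%s)" % " OR ".join(terms)

# Hypothetical suggestions, shaped like what corrector.suggest(word) returns:
fake = {"ant": ["anteater", "antelope"]}
build_query_string(["ant"], lambda w: fake.get(w, []))
# → '(ant OR anteater OR antelope)'
```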
Hope you get the idea.