C# search gets no hits for "mvc2" using Lucene.Net

I am using Lucene.Net for indexing and searching. The only problem with my code is that it finds no hits when searching for "mvc2" (it seems to work with every other word I search for). I have tried different analyzers (see the comments next to the analyzer) as well as older Lucene code. Below is my indexing and searching code. I would really appreciate it if someone could tell me where the problem lies. Thanks.

////Indexing code
public void DoIndexing(string CvContent)
    {
        //state the file location of the index
        const string indexFileLocation = @"C:\RecruitmentIndexer\IndexedCVs";

        //if the directory does not exist, create it and create a new index in it.
        //if the directory does exist, do not create the directory and do not create
        //a new index (add fields to the existing index).
        bool createNewDirectory;  //to pass into Lucene GetDirectory
        bool createNewIndex;      //to pass into the Lucene IndexWriter
        if (!Directory.Exists(indexFileLocation))
        {
            createNewDirectory = true;
            createNewIndex = true;
        }
        else
        {
            createNewDirectory = false;
            createNewIndex = false;
        }


        Lucene.Net.Store.Directory dir =
            Lucene.Net.Store.FSDirectory.GetDirectory(indexFileLocation, createNewDirectory); //creates the directory if true

        //create an analyzer to process the text
        Lucene.Net.Analysis.Analyzer analyzer = new
        Lucene.Net.Analysis.SimpleAnalyzer();              //this analyzer gets all hits except mvc2
        //Lucene.Net.Analysis.Standard.StandardAnalyzer(); //this leaves out sql once and mvc2 once

        //create the index writer with the directory and analyzer defined.
        Lucene.Net.Index.IndexWriter indexWriter = new
        Lucene.Net.Index.IndexWriter(dir, analyzer,
            /*true to create a new index*/ createNewIndex);

        //create a document, add in a single field
        Lucene.Net.Documents.Document doc = new
        Lucene.Net.Documents.Document();



        Lucene.Net.Documents.Field fldContent =
            new Lucene.Net.Documents.Field("content",
                CvContent, //"This is some text to search by indexing",
                Lucene.Net.Documents.Field.Store.YES,
                Lucene.Net.Documents.Field.Index.ANALYZED,
                Lucene.Net.Documents.Field.TermVector.YES);

        doc.Add(fldContent);

        //write the document to the index
        indexWriter.AddDocument(doc);

        //optimize and close the writer
        indexWriter.Optimize();
        indexWriter.Close();

    }
////Searching code
private void button2_Click(object sender, EventArgs e)
{
    string SearchString = textBox1.Text;

    ///after creating the index, search it
    //state the file location of the index
    const string indexFileLocation = @"C:\RecruitmentIndexer\IndexedCVs";

    Lucene.Net.Store.Directory dir =
        Lucene.Net.Store.FSDirectory.GetDirectory(indexFileLocation, false);

    //create an index searcher that will perform the search
    Lucene.Net.Search.IndexSearcher searcher = new
        Lucene.Net.Search.IndexSearcher(dir);

    SearchString = SearchString.Trim();
    SearchString = QueryParser.Escape(SearchString);

    //build the query object
    Lucene.Net.Index.Term searchTerm =
        new Lucene.Net.Index.Term("content", SearchString);
    Lucene.Net.Search.Query query = new Lucene.Net.Search.TermQuery(searchTerm);

    //execute the query
    Lucene.Net.Search.Hits hits = searcher.Search(query);
    label1.Text = hits.Length().ToString();

    //iterate through the results.
    for (int i = 0; i < hits.Length(); i++)
I believe the StandardAnalyzer actually strips the "2" out of "mvc2", leaving "mvc" as the only indexed term. I'm not sure about the SimpleAnalyzer, though. You could try the WhitespaceAnalyzer, which I believe does not strip digits.
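
One quick way to check what each analyzer actually does with "mvc2" is to dump the tokens it produces for a sample string. This is only a rough sketch using the same TokenStream/TermAttribute API as the snippet further down (the "content" field name is just passed because TokenStream asks for one, and the namespaces are those of the 2.9-era Lucene.Net builds):

    using System;
    using System.IO;
    using Lucene.Net.Analysis;
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Analysis.Tokenattributes;

    class AnalyzerCheck
    {
        // Prints the terms an analyzer would index for the given text,
        // so you can see whether "mvc2" survives or gets reduced to "mvc".
        static void DumpTokens(Analyzer analyzer, string text)
        {
            Console.Write(analyzer.GetType().Name + ": ");
            var tokenStream = analyzer.TokenStream("content", new StringReader(text));
            var termAttribute = tokenStream.GetAttribute(typeof(TermAttribute)) as TermAttribute;
            while (tokenStream.IncrementToken())
            {
                Console.Write("[" + termAttribute.Term() + "] ");
            }
            tokenStream.Close();
            Console.WriteLine();
        }

        static void Main()
        {
            const string sample = "asp.net mvc2 and sql experience";
            DumpTokens(new SimpleAnalyzer(), sample);     // letter-only tokenizer, so the "2" gets dropped
            DumpTokens(new StandardAnalyzer(), sample);   // check whether this one really loses the "2"
            DumpTokens(new WhitespaceAnalyzer(), sample); // splits on whitespace only, "mvc2" stays intact
        }
    }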

You should also process your search input the same way you process the input at indexing time. A TermQuery is an "exact" match, which means that if you try to search for "mvc2" while the actual term in the index always reads "mvc", you will not get a match.

I haven't found a way to really apply an analyzer to the query side except by using the QueryParser, and even then I always got strange results.
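
For reference, the QueryParser route would look roughly like this inside the question's button2_Click, in place of the TermQuery. This is only a sketch against the pre-3.0 Lucene.Net API the question already uses (Hits, the two-argument QueryParser constructor); the "content" field, SearchString, searcher and label1 come from that code:

    //parse the user input with the same analyzer that was used for indexing,
    //so the query terms are tokenized the same way as the indexed text
    var analyzer = new Lucene.Net.Analysis.SimpleAnalyzer();
    var parser = new Lucene.Net.QueryParsers.QueryParser("content", analyzer);

    Lucene.Net.Search.Query query =
        parser.Parse(Lucene.Net.QueryParsers.QueryParser.Escape(SearchString));

    Lucene.Net.Search.Hits hits = searcher.Search(query);
    label1.Text = hits.Length().ToString();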

Alternatively, you could try something like this, in order to "tokenize" the search string the same way your documents are indexed, and do a boolean AND search on all of its terms:

    // We use a boolean query to combine all prefix queries
    var analyzer = new SimpleAnalyzer();
    var query = new BooleanQuery();

    using ( var reader = new StringReader( queryTerms ) )
    {
        // This is what we need to do in order to get the terms one by one, kind of messy but seemed to be the only way
        var tokenStream = analyzer.TokenStream( "why_do_I_need_this", reader );
        var termAttribute = tokenStream.GetAttribute( typeof( TermAttribute ) ) as TermAttribute;

        // This will return false when all tokens have been processed.
        while ( tokenStream.IncrementToken() )
        {
            var token = termAttribute.Term();
            query.Add( new PrefixQuery( new Term( KEYWORDS_FIELD_NAME, token ) ), BooleanClause.Occur.MUST );
        }

        // I don't know if this is necessary, but can't hurt
        tokenStream.Close();
    }

If you only need exact matches, you could swap the PrefixQuery for a TermQuery (the PrefixQuery matches anything starting with the term, i.e. "search*", whereas the TermQuery matches only the exact term).
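
For the exact-match version only the clause added inside the while loop changes, roughly:

    // exact-match variant: a TermQuery clause instead of the PrefixQuery
    query.Add( new TermQuery( new Term( KEYWORDS_FIELD_NAME, token ) ), BooleanClause.Occur.MUST );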