Java phrase query not working in Lucene 4.5.0

I am trying to use a PhraseQuery, but I cannot get any hits from my search. I am using Lucene 4.5.0.

My indexing code:
private IndexWriter writer;

public LuceneIndexSF(final String indexDir) throws IOException {
    Analyzer analyzer = new KeywordAnalyzer();
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_45,
            analyzer);
    Directory directory = FSDirectory.open(new File(indexDir));
    writer = new IndexWriter(directory, config);
}

private Document getDocument(File f, String line, int lineNum)
        throws IOException {
    Document doc = new Document();
    Field field = null;
    if (line != null && line.split(DELIMITER).length >= 5) {
        String[] lineValues = line.split(DELIMITER);
        field = new Field("name", line.split("\t")[1],
                TextField.TYPE_STORED);
        doc.add(field);
        if (lineValues[2] != null && !lineValues[2].trim().isEmpty()) {
            field = new Field("ref", lineValues[2], TextField.TYPE_STORED);
            doc.add(field);
        }
        field = new Field("type", lineValues[3], TextField.TYPE_STORED);
        doc.add(field);
        field = new LongField("code", Long.parseLong(lineValues[4]),
                LongField.TYPE_STORED);
        doc.add(field);
        if (lineValues.length == 7 && lineValues[5] != null
                && !lineValues[5].trim().isEmpty()) {
            field = new Field("alias1", lineValues[5],
                    TextField.TYPE_STORED);
            doc.add(field);
        }
        if (lineValues.length == 7 && lineValues[6] != null
                && !lineValues[6].trim().isEmpty()) {
            field = new Field("alias2", lineValues[6],
                    TextField.TYPE_STORED);
            doc.add(field);
        }
    }
    field = new IntField("linenum", lineNum, IntField.TYPE_STORED);
    doc.add(field);
    return doc;
}
.... and other code where I add documents to the writer using writer.addDocument(doc);
My search code:

private static void search(String indexDir, String quer) throws IOException,
        ParseException {
    IndexReader inxRead = DirectoryReader.open(FSDirectory.open(new File(
            indexDir)));
    IndexSearcher is = new IndexSearcher(inxRead);
    String[] termArr = quer.split(" ");
    PhraseQuery phraseQuery = new PhraseQuery();
    for (int inx = 0; inx < termArr.length; inx++) {
        phraseQuery.add(new Term("name", termArr[inx]));
    }
    phraseQuery.setSlop(4);
    long start = System.currentTimeMillis();
    TopDocs hits = is.search(phraseQuery, 1000);
    long end = System.currentTimeMillis();
    System.err.println("Parser> Found " + hits.totalHits
            + " document(s) (in " + (end - start)
            + " milliseconds) that matched query '" + quer + "':");
    for (ScoreDoc scoreDoc : hits.scoreDocs) {
        Document doc = is.doc(scoreDoc.doc);
        System.out.println("Parser> " + scoreDoc.score + " :: "
                + doc.get("type") + " - " + doc.get("code") + " - "
                + doc.get("name") + ", " + doc.get("linenum"));
    }
    inxRead.close();
}
Solution

Per Arun's answer, for a phrase query to work, the Analyzer must tokenize every word in the document's fields. In my case I added a LowerCaseFilter, which lowercases all tokens so that matching is case-insensitive, and an EdgeNGramTokenFilter for autocomplete purposes:
public LuceneIndexSF(final String indexDir) throws IOException {
    Analyzer analyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName,
                java.io.Reader reader) {
            Tokenizer source = new StandardTokenizer(Version.LUCENE_45,
                    reader);
            TokenStream result = new StandardFilter(Version.LUCENE_45,
                    source);
            result = new LowerCaseFilter(Version.LUCENE_45, result);
            result = new EdgeNGramTokenFilter(Version.LUCENE_45, result, 1,
                    20);
            return new TokenStreamComponents(source, result);
        }
    };
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_45,
            analyzer);
    Directory directory = FSDirectory.open(new File(indexDir));
    writer = new IndexWriter(directory, config);
}
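For reference, here is a plain-Java sketch (no Lucene dependency) of the tokens an edge n-gram filter configured like the one above (min size 1, max size 20) would emit per lowercased term; the method name `edgeNGrams` is only illustrative, not a Lucene API:

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeNGramSketch {
    // Emit prefixes of the term from minSize up to maxSize characters,
    // mimicking what an edge n-gram filter produces per input token.
    static List<String> edgeNGrams(String term, int minSize, int maxSize) {
        List<String> grams = new ArrayList<>();
        int limit = Math.min(maxSize, term.length());
        for (int len = minSize; len <= limit; len++) {
            grams.add(term.substring(0, len));
        }
        return grams;
    }

    public static void main(String[] args) {
        // "igcse" indexes as: i, ig, igc, igcs, igcse
        System.out.println(edgeNGrams("igcse", 1, 20));
    }
}
```

Because every prefix of each word is indexed, a partially typed query term such as "camb" can still hit "cambridge", which is what makes this analyzer suitable for autocomplete.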
My final search method:
private static void search(String indexDir, String quer) throws IOException,
        ParseException {
    IndexReader inxRead = DirectoryReader.open(FSDirectory.open(new File(
            indexDir)));
    IndexSearcher is = new IndexSearcher(inxRead);
    String[] termArr = quer.split(" ");
    PhraseQuery query1 = new PhraseQuery();
    PhraseQuery query2 = new PhraseQuery();
    PhraseQuery query3 = new PhraseQuery();
    for (int inx = 0; inx < termArr.length; inx++) {
        query1.add(new Term(SchoolFinderConstant.ENTITY_NAME, termArr[inx]), inx);
        query2.add(new Term(SchoolFinderConstant.ENTITY_ALIAS1, termArr[inx]), inx);
        query3.add(new Term(SchoolFinderConstant.ENTITY_ALIAS2, termArr[inx]), inx);
    }
    BooleanQuery mainQuery = new BooleanQuery();
    mainQuery.add(query1, Occur.SHOULD);
    mainQuery.add(query2, Occur.SHOULD);
    mainQuery.add(query3, Occur.SHOULD);
    long start = System.currentTimeMillis();
    TopDocs hits = is.search(mainQuery, 1000);
    long end = System.currentTimeMillis();
    System.err.println("Parser> Found " + hits.totalHits
            + " document(s) (in " + (end - start)
            + " milliseconds) that matched query '" + quer + "':");
    for (ScoreDoc scoreDoc : hits.scoreDocs) {
        Document doc = is.doc(scoreDoc.doc);
        System.out.println("Parser> " + scoreDoc.score + " :: "
                + doc.get("type") + " - " + doc.get("code") + " - "
                + doc.get("name") + ", " + doc.get("linenum"));
    }
    inxRead.close();
}
I ran your code with KeywordAnalyzer and, as expected, it does not work, because KeywordAnalyzer "tokenizes" the entire stream as a single token. That is useful for data such as zip codes, IDs and some product names, but to get a hit you would have to supply the entire token, unchanged.

I then used WhitespaceAnalyzer, and it finds matches for your phrase query, with no changes to the rest of your code. Let me know if that works for you.

The reason searching works with MultiFieldQueryParser is that the parser runs your query through an analyzer, so the query matches what was indexed. In short, you need to make sure the index-time analyzer and the query-time analyzer are compatible.

Comments:

- Did you check that the String[] lineValues = line.split(DELIMITER); field = new Field("name", line.split("\t")[1], TextField.TYPE_STORED); part actually sends the value you think is being indexed? In particular, look into line.split("\t")[1]. If everything is good, could you post the complete Java class somewhere, along with a sample line I can use for testing, so I can take it for a spin and see what is wrong?
- @Arun Yes, it does give the correct value, I have checked. The strange thing is that search works with MultiFieldQueryParser but not with PhraseQuery. In case you need it, a sample line is "7044 International General Certificate of Secondary Education 6 IGCSE, University of Cambridge International Examinations", which you can use for testing.
- OK, I will check it tonight. Can you tell me why it does not work with StandardAnalyzer?
- StandardAnalyzer removes stop words; you need to look at what it is actually indexing. Alternatively, run your query through a QueryParser with StandardAnalyzer so that your query is processed the same way.
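To make the tokenization difference concrete, here is a plain-Java sketch (no Lucene dependency) of what the two analyzers conceptually produce; the method names are only illustrative, not Lucene APIs:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class AnalyzerSketch {
    // KeywordAnalyzer: the entire input becomes one single token.
    static List<String> keywordTokens(String text) {
        return Collections.singletonList(text);
    }

    // WhitespaceAnalyzer: the input is split on runs of whitespace.
    static List<String> whitespaceTokens(String text) {
        return Arrays.asList(text.trim().split("\\s+"));
    }

    public static void main(String[] args) {
        String line = "University of Cambridge International Examinations";
        // One token: the entire string, so a PhraseQuery term like
        // "Cambridge" never matches anything in the index.
        System.out.println(keywordTokens(line));
        // Five tokens, one per word, which is what PhraseQuery needs.
        System.out.println(whitespaceTokens(line));
    }
}
```

A PhraseQuery looks up individual terms at consecutive positions, so it can only match when the index actually contains per-word tokens, which is why switching from KeywordAnalyzer to WhitespaceAnalyzer makes the query start returning hits.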