Java: How to add a phrase as a stop word when using a Lucene analyzer?


I am using the Lucene 4.6.1 library, and I am trying to add the term hip-hop to my stop-word list.

I can exclude it when it is written as one word (hiphop), but not when it is written with a space in between (hip hop).

Here is my stop-list logic:

public static final CharArraySet STOP_SET_STEM = new CharArraySet(LUCENE_VERSION, Arrays.asList(
    "hiphop", "hip hop"
), false);
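For reference, StandardTokenizer splits input on whitespace and punctuation, so a downstream StopFilter only ever sees one word at a time. This minimal sketch (assuming the Lucene 4.6.1 setup above) shows why the two-word entry can never match:

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class TokenizerDemo {
  public static void main(String[] args) throws IOException {
    // StandardTokenizer emits "hip" and "hop" as separate tokens;
    // the single token "hip hop" never appears in the stream.
    TokenStream ts = new StandardTokenizer(Version.LUCENE_46, new StringReader("hip hop"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      System.out.println(term.toString()); // prints "hip", then "hop"
    }
    ts.end();
    ts.close();
  }
}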
More details on my custom analyzer logic:

public final class CustomWordsAnalyzer extends StopwordAnalyzerBase {
  private static final Version LUCENE_VERSION = Version.LUCENE_46;

  // Regex used to exclude non-alpha-numeric tokens
  private static final Pattern ALPHA_NUMERIC = Pattern.compile("^[a-z][a-z0-9_]+$");
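  // NOTE: sharing a single static Matcher is not thread-safe if the analyzer is used concurrently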
  private static final Matcher MATCHER = ALPHA_NUMERIC.matcher("");

  public CustomWordsAnalyzer() {
    super(LUCENE_VERSION, ProTextWordLists.STOP_SET);
  }

  public CustomWordsAnalyzer(CharArraySet stopSet) {
    super(LUCENE_VERSION, stopSet);

  }

  @Override
  protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    Tokenizer tokenizer = new StandardTokenizer(LUCENE_VERSION, reader);
    TokenStream result = new StandardFilter(LUCENE_VERSION, tokenizer);
    result = new LowerCaseFilter(LUCENE_VERSION, result);
    result = new ASCIIFoldingFilter(result);
    result = new AlphaNumericMaxLengthFilter(result);
    result = new StopFilter(LUCENE_VERSION, result, ProTextWordLists.STOP_SET);

    result = new PorterStemFilter(result);
    result = new StopFilter(LUCENE_VERSION, result, ProTextWordLists.STOP_SET_STEM);
    return new TokenStreamComponents(tokenizer, result);
  }

  /**
   * Keeps alpha-numeric tokens between 3 and 28 chars long, stripping apostrophes.
   */
  static class AlphaNumericMaxLengthFilter extends TokenFilter {
    private final CharTermAttribute termAtt;
    private final char[] output = new char[28];

    AlphaNumericMaxLengthFilter(TokenStream in) {
      super(in);
      termAtt = addAttribute(CharTermAttribute.class);
    }

    @Override
    public final boolean incrementToken() throws IOException {
      // emit the next alpha-numeric token between 3 and 28 chars long
      while (input.incrementToken()) {
        int length = termAtt.length();
        if (length >= 3 && length <= 28) {
          char[] buf = termAtt.buffer();
          int at = 0;
          for (int c = 0; c < length; c++) {
            char ch = buf[c];
            if (ch != '\'') {
              output[at++] = ch;
            }
          }
          String term = new String(output, 0, at);
          MATCHER.reset(term);
          if (MATCHER.matches() && !term.startsWith("a0")) {
            termAtt.setEmpty();
            termAtt.append(term);
            return true;
          }
        }
      }
      return false;
    }
  }
}
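To inspect what this analyzer actually emits, a hypothetical debugging snippet (assuming the ProTextWordLists class referenced above is on the classpath) could look like this:

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzerDebug {
  public static void main(String[] args) throws IOException {
    Analyzer analyzer = new CustomWordsAnalyzer();
    // In 4.x, Analyzer.tokenStream(String, String) wraps the text in a Reader internally.
    TokenStream ts = analyzer.tokenStream("body", "I like hip hop and hiphop");
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // "hiphop" is dropped by the STOP_SET_STEM filter, but "hip" and "hop"
      // survive, because StopFilter compares one token at a time.
      System.out.println(term.toString());
    }
    ts.end();
    ts.close();
    analyzer.close();
  }
}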
This is not possible with the default Lucene implementations, so the only way is to create your own Analyzer, or TokenStream, or both, which will process the data/query in the way you need (for example, by filtering out phrases).
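One possible shape for such a component is a TokenFilter with one token of look-ahead that drops a pair of adjacent tokens when they form a stop phrase. The sketch below is an assumption, not an existing Lucene class (the name StopPhraseFilter and the space-joined lookup are mine), and for brevity it does not adjust position increments when it removes tokens:

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.AttributeSource;

/** Drops two-word stop phrases such as "hip hop" from a token stream. */
public final class StopPhraseFilter extends TokenFilter {
  private final CharArraySet stopPhrases;
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private AttributeSource.State lookahead; // buffered next token, if any

  public StopPhraseFilter(TokenStream in, CharArraySet stopPhrases) {
    super(in);
    this.stopPhrases = stopPhrases;
  }

  @Override
  public boolean incrementToken() throws IOException {
    while (true) {
      if (lookahead != null) {
        restoreState(lookahead);
        lookahead = null;
      } else if (!input.incrementToken()) {
        return false;
      }
      // Attributes now hold the candidate first token of a phrase.
      String first = termAtt.toString();
      AttributeSource.State firstState = captureState();
      if (!input.incrementToken()) {
        restoreState(firstState);
        return true; // end of stream: emit the final token as-is
      }
      if (stopPhrases.contains(first + " " + termAtt.toString())) {
        continue; // the pair is a stop phrase: drop both tokens
      }
      lookahead = captureState(); // keep the second token for the next call
      restoreState(firstState);   // emit the first token with its own offsets
      return true;
    }
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    lookahead = null;
  }
}

It could then be wired into createComponents right after the LowerCaseFilter, e.g. result = new StopPhraseFilter(result, ProTextWordLists.STOP_SET_STEM);, so that phrase lookups see lower-cased terms before stemming changes them.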

Yes, I have created my own analyzer, but I still can't make this work. I think I may be doing something wrong.
Yes, possibly. Please show your analyzer code (put it on pastebin or in a gist).
Thanks, Mystion! I have just included my analyzer logic in the question. Any help would be appreciated.
Do you see that a stop word is a single word, not a phrase? Could you write a test for this custom Analyzer impl?
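As for the last comment, a test might look like the following sketch, which uses BaseTokenStreamTestCase from the org.apache.lucene:lucene-test-framework:4.6.1 artifact; the expected terms assume that none of these words appear in ProTextWordLists.STOP_SET:

import org.apache.lucene.analysis.BaseTokenStreamTestCase;

public class CustomWordsAnalyzerTest extends BaseTokenStreamTestCase {
  public void testSingleWordStopsButPhraseDoesNot() throws Exception {
    // "hiphop" is removed by the STOP_SET_STEM StopFilter, while "hip" and
    // "hop" pass through, since a StopFilter matches single tokens only.
    assertAnalyzesTo(new CustomWordsAnalyzer(), "hiphop hip hop",
        new String[] { "hip", "hop" });
  }
}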