
Java: Parsing a URL with Jsoup, I get duplicate URLs


I parse a specific URL and try to save all internal links (those on the same domain) in the allInnerLinks ArrayList, and all external URLs in the allExternalLinks ArrayList:

import java.io.IOException;

import org.jsoup.HttpStatusException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public void go() {
    Document doc;
    baseUrl = CountLinks.result3;
    try {
        // the URL passed to connect() needs an explicit http:// protocol
        doc = Jsoup
                .connect(url)
                .userAgent(
                        "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.0) Gecko/20100101 Firefox/25.0")
                .referrer("http://www.google.com").timeout(1000 * 5)
                .ignoreContentType(true).get();

        // get the page title
        String title = doc.title();

        // get all links on the page
        Elements links = doc.select("a[href]");

        for (Element link : links) {
            // resolve the href attribute against the page's base URI;
            // link.absUrl("href") would be equivalent
            String absUrl = link.attr("abs:href");

            // same domain and not a mailto: link -> internal, otherwise external
            if (absUrl.contains(baseUrl)
                    && !(absUrl.contains("mailto"))) {
                allInnerLinks.add(absUrl);
                allInnerLinksCounter++;
            } else {
                allExternalLinks.add(absUrl);
                allExternalLinksCounter++;
            }
        }

    } catch (NullPointerException e) {
        e.printStackTrace();
    } catch (HttpStatusException e) {
        e.printStackTrace();
        System.out.println(e.getUrl());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
But in the end I get duplicate elements: the same URL, but with a number sign appended at the end of the link. I don't understand where it comes from:

Page URL                                External URL
------------------------------------------------------------------
http://hostingmaks.com/category/news/   https://meetings.webex.com/
http://hostingmaks.com/category/news/   https://meetings.webex.com/
What is causing this?

I wrote a simple method below that checks whether a URL has a trailing hashtag/pound sign/number sign, returning a boolean:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Returns true when the URL ends in a plain bookmark fragment: a "#"
// followed only by letters, digits, or spaces. Fragments with other
// characters (e.g. "#/somePage" used for client-side routing) return false.
public boolean hasHashTag(String url) {
    int index = url.lastIndexOf("#");
    if (index == -1) {
        // no "#" in the URL at all
        return false;
    } else {
        Pattern p = Pattern.compile("[^a-z0-9 ]", Pattern.CASE_INSENSITIVE);
        Matcher m = p.matcher(url.substring(index + 1));
        System.out.println(url.substring(index + 1) + "   " + (index + 1));
        // true only if the fragment contains no character outside [a-z0-9 ]
        return !m.find();
    }
}
Now you can use this method to filter out the duplicates:

if (hasHashTag(URLHERE)) {
    // don't add the url to the search
} else {
    // add the url to the search
}
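
As a follow-up (an illustrative sketch, not part of the original answer): instead of filtering by fragment, you can normalize each absolute URL by stripping everything from the "#" onward and collect the results in a LinkedHashSet, which removes duplicates while preserving insertion order. The names LinkNormalizer and stripFragment below are hypothetical:

import java.net.URI;
import java.net.URISyntaxException;
import java.util.LinkedHashSet;
import java.util.Set;

public class LinkNormalizer {

    // Strips the "#..." fragment so that "http://host/page" and
    // "http://host/page#top" collapse into the same key.
    public static String stripFragment(String url) {
        try {
            URI u = new URI(url);
            return new URI(u.getScheme(), u.getAuthority(),
                           u.getPath(), u.getQuery(), null).toString();
        } catch (URISyntaxException e) {
            // fall back to a plain string cut for URLs that are not RFC-valid
            int hash = url.indexOf('#');
            return hash == -1 ? url : url.substring(0, hash);
        }
    }

    public static void main(String[] args) {
        Set<String> allInnerLinks = new LinkedHashSet<>();
        allInnerLinks.add(stripFragment("http://hostingmaks.com/category/news/"));
        allInnerLinks.add(stripFragment("http://hostingmaks.com/category/news/#"));
        System.out.println(allInnerLinks.size()); // prints 1: the duplicate is gone
    }
}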

The "#" refers to a specific bookmark on the page, such as a subsection. It may make sense to handle these separately, because some sites use the text after it to control the page, e.g. example.com/#/somePage.

@Pokechu22 Actually, if I change the line String absUrl = link.attr("abs:href"); to String absUrl = link.attr("href");, then everything is fine and there are no duplicate links. But in that case the program counts /docs/example1.doc as an external link, which is incorrect. That is why I use abs:href to get the absolute URL, but then I get the duplicates.
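
To make the trade-off in the comments concrete, here is a small sketch (the HTML snippet and base URL are made up for illustration) of how Jsoup resolves href versus abs:href: a relative href like /docs/example1.doc only becomes comparable with baseUrl after resolution, while href="#" resolves to the page URL plus a trailing "#", which is exactly where the duplicates come from.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class AbsHrefDemo {
    public static void main(String[] args) {
        // hypothetical markup; the base URI stands in for the crawled page
        String html = "<a href='/docs/example1.doc'>doc</a><a href='#'>top</a>";
        Document doc = Jsoup.parse(html, "http://hostingmaks.com/category/news/");

        for (Element link : doc.select("a[href]")) {
            // raw attribute value: relative links stay relative
            System.out.println("href     = " + link.attr("href"));
            // resolved against the base URI: /docs/example1.doc becomes absolute,
            // and href="#" becomes the page URL with a trailing "#"
            System.out.println("abs:href = " + link.attr("abs:href"));
        }
    }
}

Combining abs:href (so relative links resolve to the right domain) with fragment stripping, as sketched earlier, keeps /docs/example1.doc classified as internal while eliminating the "#" duplicates.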