Java: how to get the "title" from a webpage using HttpClient


I am trying to get the "title" of a webpage using Apache HttpClient 4.

Edit: My first approach was to try to get it from the headers (using HttpHead). If that is not possible, how can I get it from the body of the response, as @Todd suggested?
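
For reference, here is a minimal sketch of that first approach (class name and URL are placeholders). A HEAD response carries only the status line and headers, so the <title>, which lives in the HTML body, is simply not there:

import org.apache.http.Header;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpHead;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HeadExample {
    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        HttpHead head = new HttpHead("http://example.com/"); // placeholder URL
        try (CloseableHttpResponse response = client.execute(head)) {
            // A HEAD response has no body, only headers - nothing to parse for <title>.
            for (Header h : response.getAllHeaders()) {
                System.out.println(h.getName() + ": " + h.getValue());
            }
        } finally {
            client.close();
        }
    }
}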

Edit 2:

<head>
[...]
<title>This is what I need to get!</title>
[...]
</head>


Thank you all for the comments. Once I switched to jsoup, the solution turned out to be really simple:

Document doc = Jsoup.connect("http://example.com/").get();
String title = doc.title();
Given that I really do need to use HttpClient for the connection, I ended up with the following:

org.jsoup.nodes.Document doc = null;
String title = "";

System.out.println("Getting content... ");

CloseableHttpClient httpclient = HttpClients.createDefault();
HttpHost target = new HttpHost(host);
HttpGet httpget = new HttpGet(path);
CloseableHttpResponse response = httpclient.execute(target, httpget);

System.out.println("Parsing content... ");

try {
    // Read the response body line by line, decoding it as UTF-8.
    String line = null;
    StringBuilder tmp = new StringBuilder();
    BufferedReader in = new BufferedReader(
            new InputStreamReader(response.getEntity().getContent(), "UTF-8"));
    while ((line = in.readLine()) != null) {
        tmp.append(" ").append(line);
    }

    // Let jsoup build a DOM from the raw HTML and extract the <title>.
    doc = Jsoup.parse(tmp.toString());

    title = doc.title();
    System.out.println("Title=" + title); //<== ^_^

    //[...]

} finally {
    response.close();
}

System.out.println("Done.");

By using this code snippet you can still retrieve the <title> of a webpage by providing its URL:

InputStream response = null;
try {
    String url = "http://example.com/";
    response = new URL(url).openStream();

    // "\\A" makes the Scanner read the whole stream as a single token.
    Scanner scanner = new Scanner(response);
    String responseBody = scanner.useDelimiter("\\A").next();

    // Print whatever sits between <title> and </title>.
    System.out.println(responseBody.substring(
            responseBody.indexOf("<title>") + 7, responseBody.indexOf("</title>")));

} catch (IOException ex) {
    ex.printStackTrace();
} finally {
    try {
        if (response != null) {
            response.close();
        }
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}

Are you sure your target server is sending back a "title" header? The title is not in the response headers; it is part of the HTML returned in the response body, if any. @MrWiggles Yes, I'm pretty sure. I'm testing it and there is a <title> tag in the <head> section. @Todd I edited my question... Thanks. Check out a Java-based HTML parser/scraper. If you get an HTTP 403 with the code above, add .userAgent("Mozilla") to the Jsoup.connect(...) call before .get().
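
A minimal sketch of that last suggestion (the URL is a placeholder):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

// As suggested in the comments: a browser-like User-Agent often avoids
// an HTTP 403 from servers that reject the default Java user agent.
Document doc = Jsoup.connect("http://example.com/") // placeholder URL
        .userAgent("Mozilla")
        .get();
String title = doc.title();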