How can a web crawler fetch a single page, rather than the links it contains, and output its source code?

Tags: web-crawler, phpcrawl

I am using phpcrawl, and my code is below. I want to crawl the link mentioned and get everything working.

    <?php

    // It may take a while to crawl a site ...
    set_time_limit(10000);

    // Include the phpcrawl main class
    include("libs/PHPCrawler.class.php");

    // Extend the class and override the handleDocumentInfo()-method
    class MyCrawler extends PHPCrawler
    {
      function handleDocumentInfo($DocInfo)
      {
        // Detect the linebreak for output ("\n" in CLI-mode, otherwise "<br>")
        if (PHP_SAPI == "cli") $lb = "\n";
        else $lb = "<br />";

        // Print the URL and the HTTP status code
        echo "Page requested: ".$DocInfo->url." (".$DocInfo->http_status_code.")".$lb;

        // Print the referring URL
        echo "Referer-page: ".$DocInfo->referer_url.$lb;

        // Print whether the content of the document was received or not
        if ($DocInfo->received == true)
          //echo "Content received: ".$DocInfo->bytes_received." bytes".$lb;
          echo $DocInfo->bytes_received;
        else
          echo "Content not received".$lb;

        // Now you should do something with the content of the actual
        // received page or file ($DocInfo->source); we skip it in this example

        echo $lb;

        flush();
      }
    }

    // Now, create an instance of your class, define the behaviour
    // of the crawler (see class reference for more options and details)
    // and start the crawling process.

    $crawler = new MyCrawler();

    // URL to crawl
    $crawler->setURL("http://careers.republic.co.uk/pb3/corporate/Republic/search.php?page=1");

    // Only receive content of files with content-type "text/html"
    $crawler->addContentTypeReceiveRule("#text/html#");

    // Ignore links to pictures, don't even request pictures
    $crawler->addURLFilterRule("#\.(jpg|jpeg|gif|png)$# i");

    // Store and send cookie-data like a browser does
    $crawler->enableCookieHandling(true);

    // Set the traffic-limit to 1 MB (in bytes;
    // for testing we don't want to "suck" the whole site)
    $crawler->setTrafficLimit(1000 * 1024);

    // That's enough, now here we go
    $crawler->go();

    // At the end, after the process is finished, we print a short
    // report (see method getProcessReport() for more information)
    $report = $crawler->getProcessReport();

    if (PHP_SAPI == "cli") $lb = "\n";
    else $lb = "<br />";

    echo "Summary:".$lb;
    echo "Links followed: ".$report->links_followed.$lb;
    echo "Documents received: ".$report->files_received.$lb;
    echo "Bytes received: ".$report->bytes_received." bytes".$lb;
    echo "Process runtime: ".$report->process_runtime." sec".$lb;
    ?>
Right now I crawl by passing in the link, but the crawler follows every link that appears in the page's source. I only want the source code of the single link I pass in, so that I can then use XPath to do the extraction work.


Just set the page limit to 1 in phpcrawl: $crawler->setPageLimit(1);
(http://phpcrawl.cuab.de/classreferences/PHPCrawler/method_detail_tpl_method_setPageLimit.htm)
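
For illustration, here is a minimal sketch of that wired into the setup from the question. The class name SinglePageCrawler is made up for the example; setPageLimit() itself is the documented phpcrawl method linked above.

    <?php
    set_time_limit(10000);
    include("libs/PHPCrawler.class.php");

    // Illustrative class name; the override just dumps the page source
    class SinglePageCrawler extends PHPCrawler
    {
      function handleDocumentInfo($DocInfo)
      {
        // $DocInfo->source holds the raw HTML of the requested page
        echo $DocInfo->source;
      }
    }

    $crawler = new SinglePageCrawler();
    $crawler->setURL("http://careers.republic.co.uk/pb3/corporate/Republic/search.php?page=1");

    // Stop after the entry page itself, so none of its links are followed
    $crawler->setPageLimit(1);

    $crawler->go();
    ?>

With the limit set to 1, only the entry URL is requested, and $DocInfo->source contains exactly the source code of that single page.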

@NickWoodhams Can you help me? I saw your post. Hi, and thanks again, but if I dump the source code it just gives me back the same original page. How can I use it for scraping? Please help. — What exactly do you mean by "scraping"? I don't quite understand what you are trying to do.
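
On the scraping follow-up: once the single page's HTML is available in $DocInfo->source, one way to extract data from it is PHP's built-in DOM extension with DOMXPath. This is only a sketch; the query //td[@class='job-title'] is a placeholder, since the real expression depends on the markup of the page being scraped.

    <?php
    // Assumed to run inside handleDocumentInfo($DocInfo) after the page is received
    $dom = new DOMDocument();

    // Real-world HTML is rarely well-formed, so suppress parser warnings
    @$dom->loadHTML($DocInfo->source);

    $xpath = new DOMXPath($dom);

    // Placeholder query: adjust to the actual structure of the target page
    $nodes = $xpath->query("//td[@class='job-title']");

    foreach ($nodes as $node) {
      echo trim($node->textContent)."\n";
    }
    ?>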