Python: crawling the URLs inside a web page with Scrapy
I'm using Scrapy to extract data from certain websites. The problem is that my spider only crawls the initial start URL; it does not crawl the URLs found inside the page. I've copied the spider here:
from scrapy.spider import BaseSpider
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
from scrapy.utils.response import get_base_url
from scrapy.utils.url import urljoin_rfc

from nextlink.items import NextlinkItem


class Nextlink_Spider(BaseSpider):
    name = "Nextlink"
    allowed_domains = ["Nextlink"]
    start_urls = ["http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//body/div[2]/div[3]/div/ul/li[2]/a/@href')
        for site in sites:
            relative_url = site.extract()
            url = self._urljoin(response, relative_url)
            yield Request(url, callback=self.parsetext)

    def parsetext(self, response):
        log = open("log.txt", "a")
        log.write("test if the parsetext is called")
        hxs = HtmlXPathSelector(response)
        items = []
        texts = hxs.select('//div').extract()
        for text in texts:
            item = NextlinkItem()
            item['text'] = text
            items.append(item)
            log = open("log.txt", "a")
            log.write(text)
        return items

    def _urljoin(self, response, url):
        """Helper to convert relative urls to absolute"""
        return urljoin_rfc(response.url, url, response.encoding)
I use log.txt to test whether parsetext is called. However, after I run the spider, there is nothing in log.txt. My guess is that it's this line:
allowed_domains = ["Nextlink"]
That is not a domain in the domain.tld form, so it will reject every link. If you took the example from there, it should be: allowed_domains = ["dmoz.org"]
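Roughly speaking, the offsite filter compares each request's hostname against the allowed_domains entries, which is why a value like "Nextlink" can never match any crawled URL. The helper below is a simplified sketch of that check, not Scrapy's actual implementation:

```python
from urllib.parse import urlparse

def is_offsite(url, allowed_domains):
    # Sketch of the check OffsiteMiddleware performs: a request is
    # dropped unless its hostname equals, or is a subdomain of, one
    # of the allowed domains.
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in allowed_domains)

# With the spider's original setting, every URL is filtered out:
is_offsite("http://www.dmoz.org/Computers/", ["Nextlink"])   # True  -> dropped
# With a real registrable domain the same URL passes:
is_offsite("http://www.dmoz.org/Computers/", ["dmoz.org"])   # False -> kept
```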
See here:

allowed_domains

An optional list of strings containing the domains that this spider is allowed to crawl. Requests for URLs not belonging to the domain names specified in this list won't be followed if OffsiteMiddleware is enabled.

So, as long as you haven't activated OffsiteMiddleware in your settings, it doesn't matter, and you can leave allowed_domains out completely.
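If you would rather keep the attribute but make sure the middleware never filters on it, you can disable it explicitly in settings.py by mapping it to None. The dotted path below is an assumption based on Scrapy versions contemporary with this question; newer releases moved it to scrapy.spidermiddlewares.offsite, so check your installed version:

```python
# settings.py (sketch): disable OffsiteMiddleware so that
# allowed_domains is never used to filter requests.
# NOTE: the import path is version-dependent; verify it locally.
SPIDER_MIDDLEWARES = {
    'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': None,
}
```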
Check your settings.py for whether OffsiteMiddleware is activated. It should not be activated if you want your spider to crawl on any domain. I think the problem is that you didn't tell Scrapy to follow every crawled URL. For my own blog I implemented a CrawlSpider that uses LinkExtractor-based rules to extract all relevant links from my blog pages:
# -*- coding: utf-8 -*-
'''
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
* @author Marcel Lange <info@ask-sheldon.com>
* @package ScrapyCrawler
'''
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
import Crawler.settings
from Crawler.items import PageCrawlerItem
class SheldonSpider(CrawlSpider):
    name = Crawler.settings.CRAWLER_NAME
    allowed_domains = Crawler.settings.CRAWLER_DOMAINS
    start_urls = Crawler.settings.CRAWLER_START_URLS
    rules = (
        Rule(
            LinkExtractor(
                allow_domains=Crawler.settings.CRAWLER_DOMAINS,
                allow=Crawler.settings.CRAWLER_ALLOW_REGEX,
                deny=Crawler.settings.CRAWLER_DENY_REGEX,
                restrict_css=Crawler.settings.CSS_SELECTORS,
                canonicalize=True,
                unique=True
            ),
            follow=True,
            callback='parse_item',
            process_links='filter_links'
        ),
    )

    # Filter links with the nofollow attribute
    def filter_links(self, links):
        return_links = list()
        if links:
            for link in links:
                if not link.nofollow:
                    return_links.append(link)
                else:
                    self.logger.debug('Dropped link %s because nofollow attribute was set.' % link.url)
        return return_links

    def parse_item(self, response):
        # self.logger.info('Parsed URL: %s with STATUS %s', response.url, response.status)
        item = PageCrawlerItem()
        item['status'] = response.status
        item['title'] = response.xpath('//title/text()')[0].extract()
        item['url'] = response.url
        item['headers'] = response.headers
        return item
There, I described in detail how I implemented a website crawler to warm up my WordPress full-page cache.

Don't you believe in closing file handles?

Never used Scrapy, but what you read is not entirely correct: allowed_domains is just an optional list of strings, and it is only considered if you activate OffsiteMiddleware, which is not activated by default. I accidentally put a value into allowed_domains and had the same problem; it kept me busy for quite a while. The moral of the story: don't put anything into allowed_domains unless you really need it.
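The remark about file handles points at a second, independent bug in the original spider: it opens log.txt repeatedly without ever closing it, so buffered writes may never be flushed to disk, and that alone can explain an empty log file even when parsetext does run. A minimal sketch of the fix using a context manager (the helper name here is made up for illustration):

```python
def append_log(text, path="log.txt"):
    # 'with' closes the handle on exit, which flushes buffered writes;
    # without it, short writes from a crashing or exiting process may
    # never reach the file.
    with open(path, "a") as log:
        log.write(text + "\n")

append_log("test if the parsetext is called", "spider_log.txt")
```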