Getting the next page with Scrapy


I am interested in scraping the contractor data for Atlanta from this page:

From there I can open the links for the categories:

"Additions and Remodeling"
"Architects and Engineers"
"Fountains and Ponds"
…

But I can only open the first page of each category:

I am trying to get the next page by following the link behind the "Next" button:

next_page_url = response.xpath('/html/body/div[1]/center/table/tr[8]/td[2]/a/@href').extract_first()
absolute_next_page_url = response.urljoin(next_page_url)
request = scrapy.Request(absolute_next_page_url)
yield request
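For reference, `response.urljoin()` simply resolves a (possibly relative) href against the current page's URL; under the hood Scrapy delegates to the standard library's `urllib.parse.urljoin`. A minimal sketch of that behaviour in plain Python (no Scrapy needed) — the example next-page filename is hypothetical. Note that `extract_first()` returns `None` when the XPath matches nothing, and `urljoin()` does not accept `None`, which is why guarding with a truthiness check before building the request is a good habit:

```python
from urllib.parse import urljoin

page = "http://www.1800contractor.com/d.Atlanta.GA.html?link_id=3658"

# A relative href is resolved against the current page's URL;
# the last path segment and the query string are replaced.
absolute = urljoin(page, "d.Atlanta.GA.2.html")
print(absolute)  # http://www.1800contractor.com/d.Atlanta.GA.2.html

# extract_first() yields None when nothing matches, and urljoin()
# does not accept None, so guard before building the request:
next_page_url = None
if next_page_url:
    absolute = urljoin(page, next_page_url)
```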
But it makes no difference.

Here is the code of my spider:

import scrapy


class Spider_1800(scrapy.Spider):
    name = '1800contractor'
    allowed_domains = ['1800contractor.com']
    start_urls = (
        'http://www.1800contractor.com/d.Atlanta.GA.html?link_id=3658',
    )

    def parse(self, response):
        urls = response.xpath('/html/body/center/table/tr/td[2]/table/tr[6]/td/table/tr[2]/td/b/a/@href').extract()

        for url in urls:
            absolute_url = response.urljoin(url)
            request = scrapy.Request(
                absolute_url, callback=self.parse_contractors)
            yield request

        # process next page

        next_page_url = response.xpath('/html/body/div[1]/center/table/tr[8]/td[2]/a/@href').extract_first()
        absolute_next_page_url = response.urljoin(next_page_url)
        request = scrapy.Request(absolute_next_page_url)
        yield request

    def parse_contractors(self, response):
        name = response.xpath(
            '/html/body/div[1]/center/table/tr[5]/td/table/tr[1]/td/b/a/@href').extract()
        contractor = {
            'name': name,
            'url': response.url,
        }
        yield contractor

You are not paginating the right request. `parse` handles the requests generated from the URLs in `start_urls`, which means you need to enter each category first.
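The key detail is that a `scrapy.Request` created without an explicit `callback` is dispatched to the spider's `parse` method, so the question's next-page request was being treated as a category page. A tiny stand-alone mimic of that dispatch rule (plain Python, not Scrapy itself — the classes here are simplified stand-ins):

```python
# Simplified stand-ins mimicking Scrapy's rule: a Request made
# without an explicit callback is handled by the spider's parse().
class Request:
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback

class Spider:
    def parse(self, response):
        return "parse"

    def parse_contractors(self, response):
        return "parse_contractors"

def dispatch(spider, request, response=None):
    # Fall back to parse() when no callback was given.
    handler = request.callback or spider.parse
    return handler(response)

spider = Spider()
print(dispatch(spider, Request("/page2")))  # parse
print(dispatch(spider, Request("/page2", callback=spider.parse_contractors)))  # parse_contractors
```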


After hitting the start URL, the XPath you use to select the contractor URLs doesn't work. The "Next" link appears on the contractor pages, so it has to be followed after a contractor URL has been requested. This should work for you:

def parse(self, response):
    urls = response.xpath('//table//*[@class="hiCatNaked"]/@href').extract()

    for url in urls:
        absolute_url = response.urljoin(url)
        request = scrapy.Request(
            absolute_url, callback=self.parse_contractors)
        yield request

def parse_contractors(self, response):
    name = response.xpath('/html/body/div[1]/center/table/tr[5]/td/table/tr[1]/td/b/a/@href').extract()

    contractor = {
        'name': name,
        'url': response.url}
    yield contractor

    next_page_url = response.xpath("//a[b[contains(., 'Next')]]/@href").extract_first()
    if next_page_url:
        absolute_next_page_url = response.urljoin(next_page_url)
        yield scrapy.Request(absolute_next_page_url, callback=self.parse_contractors)
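One pitfall worth flagging: the Next-link XPath contains single quotes around `'Next'`, so the Python string holding it must be delimited with double quotes (or have the inner quotes escaped) — otherwise the string literal ends early and Python raises a SyntaxError. A quick stand-alone check of the two equivalent spellings:

```python
# Double-quoted Python string, so the single quotes inside the
# XPath predicate don't terminate the literal early.
next_xpath = "//a[b[contains(., 'Next')]]/@href"

# The equivalent single-quoted Python string needs escapes.
same_xpath = '//a[b[contains(., \'Next\')]]/@href'

print(next_xpath == same_xpath)  # True
```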

I still get the same result.

Please check the updated answer; it looks like you were handling the wrong request in the callback method.