Python: Scrapy CrawlSpider for AJAX content


I am trying to crawl a site for news articles. My start URL contains:

(1) links to each article:

(2) a "More" button that makes an AJAX call to dynamically load more articles within the same start URL:

One of the parameters of the AJAX call is "page", which increments each time the "More" button is clicked. For example, clicking "More" once loads an additional n articles and updates the page parameter in the button's onClick event, so that the next click of "More" loads "page" two of articles (assuming "page" 0 was loaded initially and "page" 1 on the first click).

For each "page" I would like to scrape the contents of each article using Rules, but I do not know how many "pages" there are, and I do not want to pick some arbitrary m (e.g. 10k). I cannot figure out how to set this up.

Starting from this question, I have tried to build a pool of potential URLs, but after parsing the previous URL and confirming that it contains news links for the CrawlSpider, I cannot work out how and where to send a new URL from that pool. My Rules send responses to a parse_items callback, where the article content is parsed.

Is there a way to observe the contents of the links page before applying the Rules and calling parse_items (similar to the BaseSpider example), so that I know when to stop crawling?

Simplified code (I have removed several of the fields I am parsing, for clarity):
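For illustration, using the placeholder host and parameters from the answer below, the successive AJAX requests would look something like:

http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true   (initial load)
http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=1&slugs=tsla&is_symbol_page=true   (after the first "More" click)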


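(The simplified code itself is not reproduced here. As a rough sketch only, the kind of CrawlSpider setup described above might look like the following; the class name, XPath, NewsItem item and items-module path are assumptions based on the description and the answer below, not the asker's actual code.)

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector

from myproject.items import NewsItem  # "myproject" is a placeholder for your items module


class NewsSpider(CrawlSpider):  # hypothetical name
    name = "news"
    allowed_domains = ["example.com"]
    # start URL returns "page" 0 of the AJAX-loaded headline list
    start_urls = [
        "http://example.com/account/ajax_headlines_content?type=in_focus_articles"
        "&page=0&slugs=tsla&is_symbol_page=true"
    ]

    # follow each article link found on the headline page
    rules = (
        Rule(SgmlLinkExtractor(restrict_xpaths="//div[@class='symbol_article']"),
             callback="parse_items"),
    )

    def parse_items(self, response):
        # extract the article fields (most omitted here for clarity)
        sel = Selector(response)
        item = NewsItem()
        item["url"] = response.url
        item["title"] = sel.xpath("//title/text()").extract()
        yield item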
CrawlSpider is probably too limited for your purposes here. If you need a lot of custom logic, you are usually better off inheriting from Spider.

Scrapy provides the CloseSpider exception, which you can raise when you need to stop parsing under certain conditions. The page you are crawling returns the message "There are no Focus articles on your stocks" once you go past the maximum page, so you can check for that message and stop iterating when it appears.

In your case you can go with something like this:

from urlparse import urljoin

from scrapy import log
from scrapy.spider import Spider
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.exceptions import CloseSpider
# NewsItem is assumed to be defined in your project's items module, e.g.
# from yourproject.items import NewsItem

class ExampleSite(Spider):
    name = "so"
    download_delay = 0.1

    more_pages = True
    next_page = 1

    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                      '&slugs=tsla&is_symbol_page=true']

    allowed_domains = ['example.com']

    def create_ajax_request(self, page_number):
        """
        Helper function to create ajax request for next page.
        """
        ajax_template = 'http://example.com/account/ajax_headlines_content?type=in_focus_articles&page={pagenum}&slugs=tsla&is_symbol_page=true'

        url = ajax_template.format(pagenum=page_number)
        return Request(url, callback=self.parse)

    def parse(self, response):
        """
        Parsing of each page.
        """
        if "There are no Focus articles on your stocks." in response.body:
            self.log("About to close spider", log.WARNING)
            raise CloseSpider(reason="no more pages to parse")


        # the page has content; extract the links to the articles
        sel = Selector(response)
        links_xpath = "//div[@class='symbol_article']/a/@href"
        links = sel.xpath(links_xpath).extract()
        for link in links:
            url = urljoin(response.url, link)
            # follow link to article
            # commented out to see how pagination works
            #yield Request(url, callback=self.parse_item)

        # generate request for next page
        self.next_page += 1
        yield self.create_ajax_request(self.next_page)

    def parse_item(self, response):
        """
        Parsing of each article page.
        """
        self.log("Scraping: %s" % response.url, level=log.INFO)

        hxs = Selector(response)

        item = NewsItem()

        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()').extract()
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()').extract()

        yield item
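
The code above assumes a NewsItem item class that is already defined in the project's items module; it is not part of the answer itself. A minimal definition covering just the fields used in parse_item might look like this:

from scrapy.item import Item, Field


class NewsItem(Item):
    # only the fields referenced in parse_item() above
    url = Field()
    source = Field()
    title = Field()
    date = Field()

With that in place, uncomment the yield Request(url, callback=self.parse_item) line in parse() to actually follow the article links, and run the spider with scrapy crawl so.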

Thank you very much. I am new to Scrapy and thought CrawlSpider was the way to go. This example gives me a foundation to build on.