Python: trying to click the "Next" button while scraping


I have a scraping program that needs to click a "Next" button while it scrapes. A week ago I asked a question here about how to do that and got some good answers, but the code from the answer only partially works. It scrapes page 1 and page 2, but instead of continuing on to page 3 it jumps straight to the last page, page 10, and I can't figure out why.

import csv
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import GameItem
def process_csv(csv_file):
    data = []
    reader = csv.reader(csv_file)
    next(reader)
    for fields in reader:
        if fields[0] != "":
            url = fields[0]
        else:
            continue # skip the whole row if the url column is empty
        if fields[1] != "":
            ip = "http://" + fields[1] + ":8050" # adding http and port because this is the needed scheme
        if fields[2] != "":
            useragent = fields[2]
        data.append({"url": url, "ip": ip, "ua": useragent})
    return data
class MySpider(Spider):
    name = 'splash_spider'  # Name of Spider

    # notice that we don't need to define start_urls
    # just make sure to get all the urls you want to scrape inside start_requests function

    # getting all the url + ip address + useragent pairs then request them
    def start_requests(self):

        # get the file path of the csv file that contains the pairs from the settings.py
        with open(self.settings["PROXY_CSV_FILE"], mode="r") as csv_file:
           # requests is a list of dictionaries like this -> {url: str, ua: str, ip: str}
            requests = process_csv(csv_file)

        for req in requests:
            # no need to create custom middlewares
            # just pass useragent using the headers param, and pass proxy using the meta param

            yield SplashRequest(url=req["url"], callback=self.parse, args={"wait": 3},
                    headers={"User-Agent": req["ua"]},
                    splash_url = req["ip"],
                    )

    # Scraping
    def parse(self, response):
        item = GameItem()
        for game in response.css("tr"):
            # Card Name
            yield {
                    'card_name':  game.css("a.card_popup::text").get(),
                    }

            next_page = response.css('table+ div a:nth-child(8)::attr("href")').get()
            if next_page is not None:
                yield response.follow(next_page, self.parse)
Update #1

SplashSpider.py

import csv
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest
from ..items import GameItem

# process the csv file so the url + ip address + useragent pairs are the same as defined in the file # returns a list of dictionaries, example:
# [ {'url': 'http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan',
#    'ip': 'http://204.152.114.244:8050',
#    'ua': "Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11"},
#    ...
# ]
def process_csv(csv_file):
    data = []
    reader = csv.reader(csv_file)
    next(reader)
    for fields in reader:
        if fields[0] != "":
            url = fields[0]
        else:
            continue # skip the whole row if the url column is empty
        if fields[1] != "":
            ip = "http://" + fields[1] + ":8050" # adding http and port because this is the needed scheme
        if fields[2] != "":
            useragent = fields[2]
        data.append({"url": url, "ip": ip, "ua": useragent})
    return data


class MySpider(Spider):
    name = 'splash_spider'  # Name of Spider

    # notice that we don't need to define start_urls
    # just make sure to get all the urls you want to scrape inside start_requests function

    # getting all the url + ip address + useragent pairs then request them
    def start_requests(self):

        # get the file path of the csv file that contains the pairs from the settings.py
        with open(self.settings["PROXY_CSV_FILE"], mode="r") as csv_file:
           # requests is a list of dictionaries like this -> {url: str, ua: str, ip: str}
            requests = process_csv(csv_file)

        for req in requests:
            # no need to create custom middlewares
            # just pass useragent using the headers param, and pass proxy using the meta param

            yield SplashRequest(url=req["url"], callback=self.parse, args={"wait": 3},
                    headers={"User-Agent": req["ua"]},
                    splash_url = req["ip"],
                    )
    # Scraping
    def parse(self, response):
        item = GameItem()
        for game in response.css("tr[class^=deckdbbody]"):
            # Card Name
            item["card_name"] = game.css("a.card_popup::text").extract_first()
            item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
            item["price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()

            yield item
        next_page = response.css('#content > div:last-of-type > a[href]:last-of-type').get()
        if next_page is not None:
            yield response.follow(next_page_url, self.parse)
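
For reference, here is a hypothetical proxies CSV in the shape process_csv() expects: a header row (skipped by next(reader)) followed by one row per url / proxy IP / user agent, with the http:// scheme and :8050 port added by the code above. The header names below are made up; only the column order matters:

url,ip,useragent
http://www.starcitygames.com/catalog/category/Rivals%20of%20Ixalan,204.152.114.244,Mozilla/5.0 (BlackBerry; U; BlackBerry 9320; en-GB) AppleWebKit/534.11
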
Update #2 (neither one works properly)

next_page = response.css('table+ div a:nth-child(8)::attr("href")').get()

You definitely don't want :nth-child(8); what you want is the last div and its last a that has an href attribute, i.e.:

response.css("#content > div:last-of-type > a[href]:last-of-type')

If you want to be extra diligent, you should also check the text of the matched a to make sure it contains the phrase Next.
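
A minimal sketch of that extra check, assuming the link's visible text contains the word "Next" (on this site it is apparently rendered as "- Next>>"):

link = response.css('#content > div:last-of-type > a[href]:last-of-type')
# only follow the link if its visible text actually mentions "Next"
if link and "Next" in (link.css('::text').get() or ""):
    yield response.follow(link.xpath('@href').get(), self.parse)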

Here is the correct code. It needed to use XPath instead of CSS; everything works fine now.

next_page = response.xpath('//a[contains(., "- Next>>")]/@href').get()
if next_page is not None:
    yield response.follow(next_page, self.parse)
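
For context, a sketch of where that line can sit inside the parse() callback from Update #1: the row loop yields one item per card, and the pagination check runs once per page after the loop. The placement and the fresh GameItem per row are my assumptions; the selectors are the ones already used above.

    # Scraping
    def parse(self, response):
        for game in response.css("tr[class^=deckdbbody]"):
            item = GameItem()  # fresh item per row
            item["card_name"] = game.css("a.card_popup::text").extract_first()
            item["stock"] = game.css("td[class^=deckdbbody].search_results_8::text").extract_first()
            item["price"] = game.css("td[class^=deckdbbody].search_results_9::text").extract_first()
            yield item

        # follow the "- Next>>" link once per page, after all rows have been yielded
        next_page = response.xpath('//a[contains(., "- Next>>")]/@href').get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)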

Still doesn't work. I did what you said, but it won't go past the first page. I've added the updated code to my question. Can you see what I'm still doing wrong?

If you're using the literal code you posted, then that's one problem right there, because you're assigning next_page but then using next_page_url. I'd guess the real fix is somewhere in between the two, since .get() will return the matched element rather than the href, so add next_page_url = next_page.xpath("@href").get().

That was just a mistake I made while copying the code over; it still doesn't fix the problem. It only does page 1. I've posted the new code for the next statement:
response.css("#content > div:last-of-type > a[href]:last-of-type')
next_page = response.xpath('//a[contains(., "- Next>>")]/@href').get()
if next_page is not None:
    yield response.follow(next_page, self.parse)
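
As a side note on the .get()-versus-href point from the comments above, a small sketch of the difference (selector copied from the answer; variable names match the comment thread):

link = response.css('#content > div:last-of-type > a[href]:last-of-type')
link.get()                                   # the whole serialized <a ...> element as a string, not a URL
next_page_url = link.xpath('@href').get()    # just the href value, suitable for response.follow()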