Python: Can't grab all the titles from a webpage

Tags: python, python-3.x, web-scraping, scrapy

I'm trying to recursively parse all the categories and their nested categories from here, which eventually leads to pages like this, from which I finally want to grab all the product titles.

The script can follow the steps above; however, when it comes to collecting the titles by traversing all the subsequent pages, it ends up with fewer items than the pages actually contain.

Here is what I wrote:

import scrapy
from urllib.parse import urljoin


class mySpider(scrapy.Spider):
    name = "myspider"

    start_urls = ['https://www.phoenixcontact.com/online/portal/gb?1dmy&urile=wcm%3apath%3a/gben/web/main/products/subcategory_pages/Cables_P-10/e3a9792d-bafa-4e89-8e3f-8b1a45bd2682']
    headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36"}

    def parse(self,response):
        cookie = response.headers.getlist('Set-Cookie')[1].decode().split(";")[0]
        for item in response.xpath("//div[./h3[contains(.,'Category')]]/ul/li/a/@href").getall():
            item_link = response.urljoin(item.strip())
            if "/products/list_pages/" in item_link:
                yield scrapy.Request(item_link,headers=self.headers,meta={'cookiejar': cookie},callback=self.parse_all_links)
            else:
                yield scrapy.Request(item_link,headers=self.headers,meta={'cookiejar': cookie},callback=self.parse)


    def parse_all_links(self,response):
        for item in response.css("[class='pxc-sales-data-wrp'][data-product-key] h3 > a[href][onclick]::attr(href)").getall():
            target_link = response.urljoin(item.strip())
            yield scrapy.Request(target_link,headers=self.headers,meta={'cookiejar': response.meta['cookiejar']},callback=self.parse_main_content)

        next_page = response.css("a.pxc-pager-next::attr(href)").get()
        if next_page:
            base_url = response.css("base::attr(href)").get()
            next_page_link = urljoin(base_url,next_page)
            yield scrapy.Request(next_page_link,headers=self.headers,meta={'cookiejar': response.meta['cookiejar']},callback=self.parse_all_links)


    def parse_main_content(self,response):
        item = response.css("h1::text").get()
        print(item)
How can I get all the titles available in that category?

The script ends up with a different number of results on every run.

Your main problem is that you need a separate cookiejar for each "/products/list_pages/" link in order to fetch the next pages correctly. I used a class variable, cookie, for this (see my code below) and repeatedly got the same result (4293 items).

Here is my code (I don't download the product pages; I just read the product titles from the product listings):

import scrapy


class mySpider(scrapy.Spider):
    name = "phoenixcontact"

    start_urls = ['https://www.phoenixcontact.com/online/portal/gb?1dmy&urile=wcm%3apath%3a/gben/web/main/products/subcategory_pages/Cables_P-10/e3a9792d-bafa-4e89-8e3f-8b1a45bd2682']
    headers = {"User-Agent":"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36"}
    cookie = 1

    def parse(self,response):
        # cookie = response.headers.getlist('Set-Cookie')[1].decode().split(";")[0]
        for item in response.xpath("//div[./h3[contains(.,'Category')]]/ul/li/a/@href").getall():
            item_link = response.urljoin(item.strip())
            if "/products/list_pages/" in item_link:
                cookie = self.cookie
                self.cookie += 1
                yield scrapy.Request(item_link,headers=self.headers,meta={'cookiejar': cookie},callback=self.parse_all_links, cb_kwargs={'page_number': 1})
            else:
                yield scrapy.Request(item_link,headers=self.headers,callback=self.parse)


    def parse_all_links(self,response, page_number):
        # if page_number > 1:
        #     with open("Samples/Page.htm", "wb") as f:
        #         f.write(response.body)
        # for item in response.css("[class='pxc-sales-data-wrp'][data-product-key] h3 > a[href][onclick]::attr(href)").getall():
        for item in response.xpath('//div[@data-product-key]//h3//a'):
            target_link = response.urljoin(item.xpath('./@href').get())
            item_title = item.xpath('./text()').get()
            yield {'title': item_title}
            # yield scrapy.Request(target_link,headers=self.headers,meta={'cookiejar': response.meta['cookiejar']},callback=self.parse_main_content)

        next_page = response.css("a.pxc-pager-next::attr(href)").get()
        if next_page:
            next_page_link = response.urljoin(next_page)
            yield scrapy.Request(next_page_link,headers=self.headers,meta={'cookiejar': response.meta['cookiejar']},callback=self.parse_all_links, cb_kwargs={'page_number': page_number + 1})
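One side note: response.urljoin(next_page) in the revised code resolves the relative pagination link against the page's base URL (honouring a <base href> tag), which is what the original urljoin(base_url, next_page) did by hand. The underlying stdlib behaviour it relies on (illustrative URLs, not the site's real ones):

```python
from urllib.parse import urljoin

# A relative pagination link is resolved against the document's base URL,
# which is what Scrapy's response.urljoin does for you.
base_url = "https://www.example.com/online/portal/gb/"  # hypothetical base
print(urljoin(base_url, "?page=2"))
# -> https://www.example.com/online/portal/gb/?page=2

# An absolute link overrides the base entirely:
print(urljoin(base_url, "https://www.example.com/other"))
# -> https://www.example.com/other
```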