Scraping Amazon with Scrapy in Python

Tags: python, web-scraping, scrapy

I'm new to Scrapy and I'm trying to fetch details of different laptops from amazon.in. I tried the code below, but I'm getting an error. I've included the code along with the error. Could you suggest some solutions?

Spider:

# -*- coding: utf-8 -*-
import scrapy


class AmazonLaptopsSpider(scrapy.Spider):
    name = 'amazon_laptops'
    allowed_domains = ['www.amazon.in']
    #start_urls = ['https://www.amazon.in/s?i=computers&bbn=976392031&rh=n%3A14584413031&ref=mega_elec_s23_2_1_1_5']


    def start_requests(self):
        yield scrapy.Request(url='https://www.amazon.in/s?k=laptops&ref=nb_sb_noss_2',callback=self.parse,headers={'User-Agent':"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"})

    def parse(self, response):
        products=response.xpath("//div[@class='s-include-content-margin s-border-bottom s-latency-cf-section']/div/div[2]/div[2]/div")
        for product in products:
            link='https://www.amazon.in/'+product.xpath(".//div/div/div/div/h2/a/@href").get()
            yield{

            'name':product.xpath(".//div/div/div/div/h2/a/span/text()").get(),
            'rating':product.xpath(".//div/div/div[@class='sg-col-inner']/div[@class='a-section a-spacing-none a-spacing-top-micro']/div[@class='a-row a-size-small']/span[1]/@aria-label").get(),
            'No_of_reviewers':product.xpath(".//div/div/div/div[2]/div/span[2]/@aria-label").get(),
            'Discounted_Price':product.xpath(".//div[2]/div[1]/div/div[1]/div/div/a/span[@class='a-price']/span[@class='a-offscreen']/text()").get(),
            'Original_Price':product.xpath(".//div[2]/div[1]/div/div[1]/div/div/a/span[@class='a-price a-text-price']/span[@class='a-offscreen']/text()").get(),
            }
            yield response.follow(url=link,callback=self.parse_det,headers={'User-Agent':"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"})



        next_page=response.urljoin(response.xpath("//li[@class='a-last']/a/@href").get())

        if next_page:
            yield scrapy.Request(url=next_page,callback=self.parse,headers={'User-Agent':"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"})

    def parse_det(self,response):
        deets=response.xpath("//div[@class='column col1 ']/div/div[2]/div[@class='attrG']/div[@class='pdTab']/table/tbody")
        for det in deets:
            if det.xpath(".//tr[1]/td[@class='label']/text()").get=='Brand':
                yield{'Brand':det.xpath(".//tr[1]/td[@class='value']/text()").get()}
            if det.xpath(".///tr[4]/td[@class='label']/text()").get=='Item Weight':
                yield {'weight':det.xpath(".//tr[4]/td[@class='value']/text()").get()}
            if det.xpath(".//tr[8]/td[@class='label']/text()").get=='RAM Size':
                yield {'RAM':det.xpath(".//tr[8]/td[@class='value']/text()").get()}
            if det.xpath(".//tr[11]/td[@class='label']/text()").get=='Hard Drive Size':
                yield {'Hard disk size':det.xpath(".//tr[11]/td[@class='value']/text()").get()}
            if det.xpath(".//tr[14]/td[@class='label']/text()").get=='Processor Brand':
                yield {'Processor brand':det.xpath(".//tr[16]/td[@class='value']/text()").get()}
            if det.xpath(".//tr[18]/td[@class='label']/text()").get=='Processor Type':
                yield {'Processor Type':det.xpath(".//tr[18]/td[@class='value']/text()").get()}
            if det.xpath(".//tr[20]/td[@class='label']/text()").get=='Graphic Card Description':
                yield {'Graphic card description':det.xpath(".//tr[20]/td[@class='values']/text()").get()}
            if det.xpath(".//tr[23]/td[@class='label']/text()").get=='Screen Size':
                yield {'Screen size':det.xpath(".//tr[23]/td[@class='value']/text()").get()}
Error:

Spider error processing <GET https://www.amazon.in/Dell-3595-15-6-inch-Microsoft-Integrated/dp/B0839L8XW1/ref=sr_1_9?dchild=1&keywords=laptops&qid=1591612091&sr=8-9> (referer: https://www.amazon.in/s?k=laptops&ref=nb_sb_noss_2)
I'm able to scrape everything from the listing pages across multiple pages, but the error occurs as soon as Scrapy follows the link to any individual laptop.

The part of the log where the error occurs:

 DEBUG: Scraped from <200 https://www.amazon.in/s?k=laptops&ref=nb_sb_noss_2>
{'name': 'ASUS VivoBook 15 X509FA-EJ341T 15.6-inch Laptop (8th Gen Core i3-8145U/4GB/1TB HDD/Windows 10 Home (64bit)/Intel Integrated UHD 620 Graphics), Transparent Silver', 'rating': '4.4 out of 5 stars', 'No_of_reviewers': '7', 'Discounted_Price': '₹30,900', 'Original_Price': '₹36,690'}
2020-06-08 16:02:18 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.amazon.in/HP-eq0132AU-15-6-inch-Windows-Graphics/dp/B08978XKP8/ref=sr_1_2_sspa?dchild=1&keywords=laptops&qid=1591612091&sr=8-2-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUExMEpUVThUSUJYMFMyJmVuY3J5cHRlZElkPUEwMjMzNDcwMU4zUzlMVzY1UFY2RyZlbmNyeXB0ZWRBZElkPUEwODQ0MjQxMjQzVzVCN01QNllVUCZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=> from <GET https://www.amazon.in/gp/slredirect/picassoRedirect.html/ref=pa_sp_atf_aps_sr_pg1_2?ie=UTF8&adId=A0844241243W5B7MP6YUP&url=%2FHP-eq0132AU-15-6-inch-Windows-Graphics%2Fdp%2FB08978XKP8%2Fref%3Dsr_1_2_sspa%3Fdchild%3D1%26keywords%3Dlaptops%26qid%3D1591612091%26sr%3D8-2-spons%26psc%3D1&qualifier=1591612091&id=1813870958375055&widgetName=sp_atf>
2020-06-08 16:02:19 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.amazon.in/Lenovo-Ideapad-Generation-Windows-81VD0082IN/dp/B08667RQSK/ref=sr_1_1_sspa?dchild=1&keywords=laptops&qid=1591612091&sr=8-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUExMEpUVThUSUJYMFMyJmVuY3J5cHRlZElkPUEwMjMzNDcwMU4zUzlMVzY1UFY2RyZlbmNyeXB0ZWRBZElkPUEwNjAyNTgzMldVS0Q2VjU5RUkxUSZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=> from <GET https://www.amazon.in/gp/slredirect/picassoRedirect.html/ref=pa_sp_atf_aps_sr_pg1_1?ie=UTF8&adId=A06025832WUKD6V59EI1Q&url=%2FLenovo-Ideapad-Generation-Windows-81VD0082IN%2Fdp%2FB08667RQSK%2Fref%3Dsr_1_1_sspa%3Fdchild%3D1%26keywords%3Dlaptops%26qid%3D1591612091%26sr%3D8-1-spons%26psc%3D1&qualifier=1591612091&id=1813870958375055&widgetName=sp_atf>
2020-06-08 16:02:19 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.amazon.in/Inspiron-5370-13-3-inch-i7-8550U-Graphics/dp/B07B6K4YM6/ref=sr_1_12_sspa?dchild=1&keywords=laptops&qid=1591612091&sr=8-12-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUExMEpUVThUSUJYMFMyJmVuY3J5cHRlZElkPUEwMjMzNDcwMU4zUzlMVzY1UFY2RyZlbmNyeXB0ZWRBZElkPUEwMzA0NzUxMk5NMTFNQktEWDdPWSZ3aWRnZXROYW1lPXNwX210ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=> from <GET https://www.amazon.in/gp/slredirect/picassoRedirect.html/ref=pa_sp_mtf_aps_sr_pg1_2?ie=UTF8&adId=A03047512NM11MBKDX7OY&url=%2FInspiron-5370-13-3-inch-i7-8550U-Graphics%2Fdp%2FB07B6K4YM6%2Fref%3Dsr_1_12_sspa%3Fdchild%3D1%26keywords%3Dlaptops%26qid%3D1591612091%26sr%3D8-12-spons%26psc%3D1&qualifier=1591612091&id=1813870958375055&widgetName=sp_mtf>
2020-06-08 16:02:19 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.amazon.in/Acer-14-inch-Windows-Charcoal-SF514-54T/dp/B082FHZW6V/ref=sr_1_11_sspa?dchild=1&keywords=laptops&qid=1591612091&sr=8-11-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUExMEpUVThUSUJYMFMyJmVuY3J5cHRlZElkPUEwMjMzNDcwMU4zUzlMVzY1UFY2RyZlbmNyeXB0ZWRBZElkPUEwMjc2NzQ3MlpTSTJOUUU5S0FKQSZ3aWRnZXROYW1lPXNwX210ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=> from <GET https://www.amazon.in/gp/slredirect/picassoRedirect.html/ref=pa_sp_mtf_aps_sr_pg1_1?ie=UTF8&adId=A02767472ZSI2NQE9KAJA&url=%2FAcer-14-inch-Windows-Charcoal-SF514-54T%2Fdp%2FB082FHZW6V%2Fref%3Dsr_1_11_sspa%3Fdchild%3D1%26keywords%3Dlaptops%26qid%3D1591612091%26sr%3D8-11-spons%26psc%3D1&qualifier=1591612091&id=1813870958375055&widgetName=sp_mtf>
2020-06-08 16:02:20 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.amazon.in/Lenovo-V145-AMD-A6-Laptop-Windows-81MTA000IH/dp/B083C9RDCW/ref=sr_1_7?dchild=1&keywords=laptops&qid=1591612091&sr=8-7> (referer: https://www.amazon.in/s?k=laptops&ref=nb_sb_noss_2)
2020-06-08 16:02:20 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.amazon.in/Lenovo-V145-AMD-A6-Laptop-Windows-81MTA000IH/dp/B083C9RDCW/ref=sr_1_7?dchild=1&keywords=laptops&qid=1591612091&sr=8-7> (referer: https://www.amazon.in/s?k=laptops&ref=nb_sb_noss_2)

There are multiple problems in your code:

The yield problem

Ideally, you create one dictionary, add every field you want in the result to it, and then yield the final dictionary once, so that each laptop comes out as a single item rather than as several one-field items. You should do something like this:

item = dict()
if det.xpath(".//tr[1]/td[@class='label']/text()").get() == 'Brand':
    item['Brand'] = det.xpath(".//tr[1]/td[@class='value']/text()").get()
if det.xpath(".//tr[4]/td[@class='label']/text()").get() == 'Item Weight':
    item['weight'] = det.xpath(".//tr[4]/td[@class='value']/text()").get()
.
.
.
yield item
The problem with get

There is a typo in several of the places where you call get(). The syntax is incorrect: it is get(), not get. Without the parentheses you compare the bound method object itself against the string, which is always False, so those conditions silently never match. For example,

if det.xpath(".//tr[11]/td[@class='label']/text()").get=='Hard Drive Size':

should be

if det.xpath(".//tr[11]/td[@class='label']/text()").get()=='Hard Drive Size':
The wrong XPath

This is what causes the immediate error. Note that you have three slashes:

if det.xpath(".///tr[4]/td[@class='label']/text()").get() == 'Item Weight':

should be

if det.xpath(".//tr[4]/td[@class='label']/text()").get() == 'Item Weight':

.///tr[4] is not a valid XPath expression, so the selector raises a ValueError as soon as parse_det evaluates it, which is the "Spider error processing" entry you see in the log.
