
Python: How to override the file_path function in Scrapy 1.7.3?


Without overriding the file_path function, the spider downloads all of the images with the default "request URL hash" file names. However, when I try to override the function, it simply doesn't work. There is nothing in the default output attribute images.

I have tried relative and absolute paths for the IMAGES_STORE variable in settings.py, as well as in the file_path function, with no success. Even when I override the file_path function with the exact same default file_path function, the images are not downloaded.

Any help would be greatly appreciated.

settings.py

BOT_NAME = 'HomeApp2'

SPIDER_MODULES = ['HomeApp2.spiders']
NEWSPIDER_MODULE = 'HomeApp2.spiders'

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.93 Safari/537.36'

# ScrapySplash settings
SPLASH_URL = 'http://192.168.99.100:8050'
DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
        }
SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
        }
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'HomeApp2.pipelines.DuplicatesPipeline': 250,
    'HomeApp2.pipelines.ProcessImagesPipeline': 251,
    'HomeApp2.pipelines.HomeApp2Pipeline': 300,
}

IMAGES_STORE = 'files'
pipelines.py

import json
import scrapy
from scrapy.exceptions import DropItem  
from scrapy.pipelines.images import ImagesPipeline

class DuplicatesPipeline(object):  
    def __init__(self): 
        self.sku_seen = set() 

    def process_item(self, item, spider): 
        if item['sku'] in self.sku_seen: 
            raise DropItem("Repeated item found: %s" % item) 
        else: 
            self.sku_seen.add(item['sku']) 
            return item

class ProcessImagesPipeline(ImagesPipeline):

    '''
    def file_path(self, request):
        print('!!!!!!!!!!!!!!!!!!!!!!!!!')
        sku = request.meta['sku']
        num = request.meta['num']
        return '%s/%s.jpg' % (sku, num)
    '''

    def get_media_requests(self, item, info):
        print('- - - - - - - - - - - - - - - - - -')
        sku = item['sku']
        for num, image_url in item['image_urls'].items():
            yield scrapy.Request(url=image_url, meta = {'sku': sku,
                                                        'num': num})

class HomeApp2Pipeline(object):
    def __init__(self):
        self.file = open('items.jl', 'w')

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + '\n'
        self.file.write(line)
        return item
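As an aside, the HomeApp2Pipeline above opens items.jl in __init__ and never closes it. Scrapy pipelines conventionally manage resources with the open_spider / close_spider hooks instead. A minimal sketch of the same JSON-lines pipeline written that way (plain Python, nothing beyond the standard library; the class name is made up for illustration):

```python
import json

class JsonLinesExportPipeline:
    """Writes each scraped item as one JSON object per line (JSON Lines)."""

    def open_spider(self, spider):
        # Called once when the spider starts.
        self.file = open('items.jl', 'w')

    def close_spider(self, spider):
        # Called once when the spider finishes; flushes and closes the file.
        self.file.close()

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item)) + '\n')
        return item
```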
App2.py

import scrapy
from scrapy_splash import SplashRequest
from HomeApp2.items import HomeAppItem

class AppScrape2Spider(scrapy.Spider):
    name = 'AppScrape2'

    def start_requests(self):
        yield SplashRequest(
            url = 'https://www.appliancesonline.com.au/product/samsung-sr400lstc-400l-top-mount-fridge?sli_sku_jump=1',
            callback = self.parse,
        )

    def parse(self, response):

        item = HomeAppItem()

        product = response.css('aol-breadcrumbs li:nth-last-of-type(1) .breadcrumb-link ::text').extract_first().rsplit(' ', 1)
        if product is None:
            return {}
        item['sku'] = product[-1]
        item['image_urls'] = {}

        root_url = 'https://www.appliancesonline.com.au'
        product_picture_count = 0
        for pic in response.css('aol-product-media-gallery-main-image-portal img.image'):
            product_picture_count = product_picture_count + 1
            item['image_urls']['p'+str(product_picture_count)] = (
            root_url + pic.css('::attr(src)').extract_first())

        feature_count = 0
        for feat in response.css('aol-product-features .feature'):
            feature_count = feature_count + 1
            item['image_urls']['f'+str(feature_count)] = (
            root_url + feat.css('.feature-image ::attr(src)').extract_first())

        yield item
items.py

import scrapy

class HomeAppItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    sku = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()

    pass

After some trial and error, I found the solution. It was simply a matter of adding the remaining parameters to the file_path method. Change

def file_path(self, request):

to

def file_path(self, request, response=None, info=None):

It seems my original code was overriding the method incorrectly, causing the calls to it to fail.

Maybe it should inherit from the class FilesPipeline instead. I have some old example code that changes the file names and uses FilesPipeline. By the way, that code doesn't need a project: it keeps everything in one file and runs as a standalone script. Also, in a comment in my code I found that it had to send the images as {'image_urls': [url]}, so the problem may be that you are putting the URLs in ['image_urls']['product']. Or something has changed in Scrapy over the years. By the way, in the code you can use print() or logging to see whether it is executed at all. Did you check whether file_path is ever called? What if it raises an exception?

@furas I tried inheriting from FilesPipeline, but no luck; the same problem occurs. As for the nested dict, I removed it and tried the code again; it still doesn't work when I override the file_path method. There is a bug in the documentation.
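The failure mode behind this fix can be reproduced without Scrapy at all: when a subclass overrides a method with fewer parameters than the base class declares, any caller that passes the extra arguments gets a TypeError. A stand-alone sketch (plain Python; the class names are made up for illustration, with the base signature mirroring file_path as quoted in the answer above):

```python
class BasePipeline:
    # Mirrors the file_path signature from the answer above.
    def file_path(self, request, response=None, info=None):
        return 'default.jpg'

class BrokenPipeline(BasePipeline):
    # Override drops the response/info parameters, as in the question.
    def file_path(self, request):
        return 'custom.jpg'

class FixedPipeline(BasePipeline):
    # Keeps the full signature, as in the accepted fix.
    def file_path(self, request, response=None, info=None):
        return 'custom.jpg'

pipeline = BrokenPipeline()
try:
    # Framework internals call file_path with the extra keyword arguments.
    pipeline.file_path('some-request', response=None, info=None)
except TypeError as exc:
    print('broken override raises:', exc)

print('fixed override returns:', FixedPipeline().file_path('some-request', response=None, info=None))
```

The broken override raises TypeError because it does not accept the response and info keywords, while the fixed override returns the custom path as intended.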