Don't know how to use scrapy's Item Loader

I'm learning how to use Scrapy's Item Loaders. Can someone tell me what I'm doing wrong? Thanks in advance.

import scrapy
from items.items import ItemsItem
from scrapy.loader import ItemLoader

class ItemspiderSpider(scrapy.Spider):
    name = 'itemspider'
    allowed_domains = ['yellowpages.com']
    start_urls = ['https://www.yellowpages.com/search?search_terms=handyman&geo_location_terms=Miami%2C+FL']

    def parse(self, response):
        #create the loader using the response
        l = ItemLoader(item=ItemsItem(), response=response)
        #create a for loop
        for listing in response.css('div.search-results.organic div.srp-listing'):
            l.add_css('Name', listing.css('a.business-name span::text').extract())
            l.add_css('Details', response.urljoin(listing.css('a.business-name::attr(href)')))
            l.add_css('WebSite', listing.css('a.track-visit-website::attr(href)').extract_first())
            l.add_css('Phones', listing.css('div.phones::text').extract())

            yield l.load_item()
When I run the code, I keep getting the following error:

root@debian:~/Desktop/items/items/spiders# scrapy runspider itemspider.py -o item.csv
/usr/local/lib/python3.5/dist-packages/scrapy/spiderloader.py:37: UserWarning: There are several spiders with the same name:

  ItemspiderSpider named 'itemspider' (in items.spiders.itemspider)
  ItemspiderSpider named 'itemspider' (in items.spiders.itemspiderLog)

  This can cause unexpected behavior.
  warnings.warn(msg, UserWarning)
2017-07-04 16:33:20 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: items)
2017-07-04 16:33:20 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'items', 'FEED_FORMAT': 'csv', 'SPIDER_LOADER_WARN_ONLY': True, 'SPIDER_MODULES': ['items.spiders'], 'FEED_URI': 'item.csv', 'ROBOTSTXT_OBEY': True, 'NEWSPIDER_MODULE': 'items.spiders'}
2017-07-04 16:33:20 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats']
2017-07-04 16:33:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-07-04 16:33:20 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-07-04 16:33:20 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-07-04 16:33:20 [scrapy.core.engine] INFO: Spider opened
2017-07-04 16:33:20 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-07-04 16:33:20 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-07-04 16:33:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.yellowpages.com/robots.txt> (referer: None)
2017-07-04 16:33:23 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.yellowpages.com/search?search_terms=handyman&geo_location_terms=Miami%2C+FL> (referer: None)
2017-07-04 16:33:24 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.yellowpages.com/search?search_terms=handyman&geo_location_terms=Miami%2C+FL> (referer: None)
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/local/lib/python3.5/dist-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/usr/local/lib/python3.5/dist-packages/scrapy/spidermiddlewares/referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/local/lib/python3.5/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/local/lib/python3.5/dist-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/root/Desktop/items/items/spiders/itemspider.py", line 17, in parse
    l.add_css('Details', response.urljoin(listing.css('a.business-name::attr(href)')))
  File "/usr/local/lib/python3.5/dist-packages/scrapy/http/response/text.py", line 82, in urljoin
    return urljoin(get_base_url(self), url)
  File "/usr/lib/python3.5/urllib/parse.py", line 416, in urljoin
    base, url, _coerce_result = _coerce_args(base, url)
  File "/usr/lib/python3.5/urllib/parse.py", line 112, in _coerce_args
    raise TypeError("Cannot mix str and non-str arguments")
TypeError: Cannot mix str and non-str arguments
2017-07-04 16:33:24 [scrapy.core.engine] INFO: Closing spider (finished)
2017-07-04 16:33:24 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 503,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 52924,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 7, 4, 21, 33, 24, 121098),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'memusage/max': 49471488,
 'memusage/startup': 49471488,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/TypeError': 1,
 'start_time': datetime.datetime(2017, 7, 4, 21, 33, 20, 705391)}
2017-07-04 16:33:24 [scrapy.core.engine] INFO: Spider closed (finished)

Not sure what's going on; this is actually my first time trying to use ItemLoaders.

There are a few problems with your code:

  • response.urljoin() expects a single string argument, not a list. You are passing it the result of listing.css(), which is a list of selectors (a SelectorList). Use response.urljoin(listing.css('a.business-name::attr(href)').extract_first()) instead.
  • You need to instantiate a new item loader on each loop iteration; otherwise you keep accumulating values into the fields of one single item.
  • You are calling .add_css() with already-extracted values, but .add_css() expects a CSS selector string. Either pass the extracted data to .add_value(), or give .add_css() an actual selector; both options are shown below.

Something like this should work:
    import scrapy
    from items.items import ItemsItem
    from scrapy.loader import ItemLoader
    
    class ItemspiderSpider(scrapy.Spider):
        name = 'itemspider'
        allowed_domains = ['yellowpages.com']
        start_urls = ['https://www.yellowpages.com/search?search_terms=handyman&geo_location_terms=Miami%2C+FL']
    
        def parse(self, response):

            for listing in response.css('div.search-results.organic div.srp-listing'):

                # create a new loader for each listing, inside the loop
                l = ItemLoader(item=ItemsItem())

                # use .add_value() since we pass the extraction result directly
                l.add_value('Name', listing.css('a.business-name span::text').extract())

                # pass a single value to response.urljoin()
                l.add_value('Details',
                            response.urljoin(
                                listing.css('a.business-name::attr(href)').extract_first()
                            ))
                l.add_value('WebSite', listing.css('a.track-visit-website::attr(href)').extract_first())
                l.add_value('Phones', listing.css('div.phones::text').extract())

                yield l.load_item()
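
Or, even simpler: pass the listing selector to the item loader, so that the CSS expressions given to .add_css() are evaluated relative to each search result: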
    
    import scrapy
    from items.items import ItemsItem
    from scrapy.loader import ItemLoader
    
    class ItemspiderSpider(scrapy.Spider):
        name = 'itemspider'
        allowed_domains = ['yellowpages.com']
        start_urls = ['https://www.yellowpages.com/search?search_terms=handyman&geo_location_terms=Miami%2C+FL']
    
        def parse(self, response):
    
            for listing in response.css('div.search-results.organic div.srp-listing'):
    
                # pass the 'listing' selector to the item loader
                # so that CSS selection is relative to it
                l = ItemLoader(ItemsItem(), selector=listing)            
    
                l.add_css('Name', 'a.business-name span::text')
                l.add_css('Details', 'a.business-name::attr(href)')
                l.add_css('WebSite', 'a.track-visit-website::attr(href)')
                l.add_css('Phones', 'div.phones::text')
    
                yield l.load_item()
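
For completeness: both versions assume that ItemsItem in items/items.py defines the four fields used above. The original post doesn't show that file, so the following is only a minimal sketch of what it presumably looks like; the TakeFirst output processor is an optional addition that keeps single-valued fields from being exported wrapped in lists:

    import scrapy
    from scrapy.loader.processors import TakeFirst

    class ItemsItem(scrapy.Item):
        # field names must match the first argument of add_value()/add_css()
        Name = scrapy.Field()
        Details = scrapy.Field(output_processor=TakeFirst())
        WebSite = scrapy.Field(output_processor=TakeFirst())
        Phones = scrapy.Field()

With either spider version and an item definition along these lines, scrapy runspider itemspider.py -o item.csv should write one CSV row per listing.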