Spider runs with Scrapy, but no data is stored in the csv

I wrote a spider that extracts data by following links inside a page and then moves on to the next page. It follows the about link of each author on quotes.toscrape.com:

import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com',]

    def parse(self, response):
        linkto = response.css('div.quote > span > a::attr(href)').extract()
        for links in linkto:
            links = response.urljoin(links)
            yield scrapy.Request(url=links, callback = scrapy.parse_about)


        nextp = response.css('li.next > a::attr(href)').extract()
        if nextp:
            nextp = response.urljoin(nextp)
            yield scrapy.Request(url=nextp, callback=self.parse)



    def parse_about(self, response):
        yield {
            'date_of_birth': response.css('span.author-born-date::text').extract(),
            'author': response.css('h3.author-title::text').extract(),
        }
I run it from the command prompt with:

scrapy crawl test -o test.csv
But this is the result I get:

2019-03-20 16:36:03 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: quotestoscrape)
2019-03-20 16:36:03 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 17.5.0, Python 2.7.15 |Anaconda, Inc.| (default, Nov 13 2018, 17:33:26) [MSC v.1500 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1  11 Sep 2018), cryptography 2.5, Platform Windows-10-10.0.17134
2019-03-20 16:36:03 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'quotestoscrape.spiders', 'SPIDER_MODULES': ['quotestoscrape.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'quotestoscrape'}
2019-03-20 16:36:03 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2019-03-20 16:36:03 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-03-20 16:36:03 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-03-20 16:36:03 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-03-20 16:36:03 [scrapy.core.engine] INFO: Spider opened
2019-03-20 16:36:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-03-20 16:36:03 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2019-03-20 16:36:03 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2019-03-20 16:36:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com> (referer: None)
2019-03-20 16:36:04 [scrapy.core.scraper] ERROR: Spider error processing <GET http://quotes.toscrape.com> (referer: None)
Traceback (most recent call last):
  File "C:\Users\kenny\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "C:\Users\kenny\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
    for x in result:
  File "C:\Users\kenny\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Users\kenny\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\kenny\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\kenny\quotestoscrape\quotestoscrape\spiders\QuoteTestSpider.py", line 13, in parse
    yield scrapy.Request(url=links, callback = scrapy.parse_about)
AttributeError: 'module' object has no attribute 'parse_about'
2019-03-20 16:36:04 [scrapy.core.engine] INFO: Closing spider (finished)
2019-03-20 16:36:04 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 446,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 2701,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/404': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 3, 20, 21, 36, 4, 41000),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 1,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/AttributeError': 1,
 'start_time': datetime.datetime(2019, 3, 20, 21, 36, 3, 468000)}
2019-03-20 16:36:04 [scrapy.core.engine] INFO: Spider closed (finished)
And the csv file it exports to (test.csv) is empty:


Please let me know what I am doing wrong.

Judging from your log, the parse_about method is never called, because you pass scrapy.parse_about as the callback instead of the spider's own self.parse_about:

....
        for links in linkto:
            links = response.urljoin(links)
            yield scrapy.Request(url=links, callback=self.parse_about)

Because your spider scrapes no data, the csv file it creates is empty.
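
For reference, here is a minimal corrected version of the whole spider (a sketch, assuming Scrapy 1.5 as in your log; newer Scrapy versions also offer get()/getall() in place of extract_first()/extract()). Besides the callback fix, note that the next-page link uses extract_first(): extract() returns a list, and response.urljoin() expects a single string.

import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com']

    def parse(self, response):
        # Follow each author's about link on the page.
        for link in response.css('div.quote > span > a::attr(href)').extract():
            yield scrapy.Request(url=response.urljoin(link),
                                 callback=self.parse_about)

        # extract_first() returns a single string (or None), which is
        # what response.urljoin() expects; extract() would return a list.
        nextp = response.css('li.next > a::attr(href)').extract_first()
        if nextp:
            yield scrapy.Request(url=response.urljoin(nextp),
                                 callback=self.parse)

    def parse_about(self, response):
        # extract_first() yields plain strings instead of one-element
        # lists, which keeps the csv columns clean.
        yield {
            'date_of_birth': response.css('span.author-born-date::text').extract_first(),
            'author': response.css('h3.author-title::text').extract_first(),
        }

With that version, scrapy crawl test -o test.csv should write one row per author to test.csv.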
