Python Scrapy not showing yield results in the terminal


After running/saving the script shown below, I tried to view the results in the terminal, but without success.

The code is quite simple, but I can't seem to find a fix.

import scrapy

class TickersSpider(scrapy.Spider):
    name = 'tickers'
    allowed_domains = ['www.seekingalpha.com/']
    start_urls = ['https://seekingalpha.com/market-news/on-the-move']

    def parse(self, response):
        articles_all = response.xpath('//div[@class="title"]/a/text()').getall()
        articles_gainers = response.path('//div[@class="title"]/a[contains(text(), "remarket gainers")]/text()').getall()
    
        yield {
            'articles': articles_all,
            'articles_gainers': articles_gainers
            }
        
I also double-checked that I am running it from the correct directory. This is what shows up when I run scrapy crawl tickers in the terminal:

2020-07-25 16:53:35 [scrapy.utils.log] INFO: Scrapy 2.2.0 started (bot: seekingalpha)
2020-07-25 16:53:35 [scrapy.utils.log] INFO: Versions: lxml 4.5.2.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 3.0, Platform Windows-10-10.0.18362-SP0
2020-07-25 16:53:35 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-07-25 16:53:35 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'seekingalpha',
 'NEWSPIDER_MODULE': 'seekingalpha.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['seekingalpha.spiders']}
2020-07-25 16:53:35 [scrapy.extensions.telnet] INFO: Telnet Password: 2cb47f969c26a413
2020-07-25 16:53:35 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2020-07-25 16:53:36 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-07-25 16:53:36 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-07-25 16:53:36 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-07-25 16:53:36 [scrapy.core.engine] INFO: Spider opened
2020-07-25 16:53:36 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-07-25 16:53:36 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-07-25 16:53:36 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://seekingalpha.com/robots.txt> (referer: None)
2020-07-25 16:53:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://seekingalpha.com/market-news/on-the-move> (referer: None)
2020-07-25 16:53:37 [scrapy.core.scraper] ERROR: Spider error processing <GET https://seekingalpha.com/market-news/on-the-move> (referer: None)
Traceback (most recent call last):
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\utils\defer.py", line 120, in iter_errback
    yield next(it)
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\utils\python.py", line 346, in __next__
    return next(self.data)
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\utils\python.py", line 346, in __next__
    return next(self.data)
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 340, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Users\MICRO\Anaconda3\envs\virtual_workspace\lib\site-packages\scrapy\core\spidermw.py", line 64, in _evaluate_iterable
    for r in iterable:
  File "C:\Users\MICRO\PythonDir\projects\seekingalpha\seekingalpha\spiders\tickers.py", line 11, in parse
    articles_gainers = response.path('//div[@class="title"]/a[contains(text(), "remarket gainers")]').getall()
AttributeError: 'HtmlResponse' object has no attribute 'path'
2020-07-25 16:53:37 [scrapy.core.engine] INFO: Closing spider (finished)
2020-07-25 16:53:37 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 511,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 158291,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/403': 1,
 'elapsed_time_seconds': 0.987867,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 7, 25, 19, 53, 37, 13084),
 'log_count/DEBUG': 2,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/403': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/AttributeError': 1,
 'start_time': datetime.datetime(2020, 7, 25, 19, 53, 36, 25217)}
2020-07-25 16:53:37 [scrapy.core.engine] INFO: Spider closed (finished)
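
The traceback already points at the cause: the spider crashes with an AttributeError before it ever reaches the yield statement, because HtmlResponse has no path() method; the selector shortcut is xpath() (or css()). Since parse() raises, no item is ever produced, which is why nothing appears in the terminal. Below is a minimal corrected sketch, keeping the original XPath expressions unchanged. Note also that allowed_domains entries should be bare domain names, without scheme or trailing slash, so the offsite middleware can match them against request hostnames:

import scrapy

class TickersSpider(scrapy.Spider):
    name = 'tickers'
    # Bare domain, no scheme or trailing slash, matching the start URL's host
    allowed_domains = ['seekingalpha.com']
    start_urls = ['https://seekingalpha.com/market-news/on-the-move']

    def parse(self, response):
        # response.xpath (not response.path) is the selector shortcut on HtmlResponse
        articles_all = response.xpath('//div[@class="title"]/a/text()').getall()
        articles_gainers = response.xpath(
            '//div[@class="title"]/a[contains(text(), "remarket gainers")]/text()'
        ).getall()

        yield {
            'articles': articles_all,
            'articles_gainers': articles_gainers,
        }

Once the typo is fixed, each yielded item should appear in the crawl log as a DEBUG line reading "Scraped from <200 ...>" followed by the item dict, and item_scraped_count should show up in the final stats. To save the items instead of reading them off the log, a feed export such as scrapy crawl tickers -o items.json works as well.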
            