Python Scrapy "scrapy crawl" catches exceptions internally vs. Jenkins' "catch" clause

I run scrapy through Jenkins every day, and I would like exceptions to be emailed to me.

Here is an example spider:

from scrapy import Spider


class ExceptionTestSpider(Spider):
    name = 'exception_test'

    start_urls = ['http://google.com']

    def parse(self, response):
        raise Exception
Here is the .jenkins file:

#!/usr/bin/env groovy
try {
    node ('jenkins-small-py3.6'){
        ...
        stage('Execute Spider') {
            sh """
                cd ...
                /usr/local/bin/scrapy crawl exception_test
            """
        }
    }
} catch (exc) {
    echo "Caught: ${exc}"
    mail subject: "...",
            body: "The spider is failing",
              to: "...",
            from: "..."

    /* Rethrow to fail the Pipeline properly */
    throw exc
}
Here is the log:

...
INFO:scrapy.core.engine:Spider opened
2019-08-22 10:49:49 [scrapy.core.engine] INFO: Spider opened
INFO:scrapy.extensions.logstats:Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-08-22 10:49:49 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
DEBUG:scrapy.extensions.telnet:Telnet console listening on 127.0.0.1:6023
DEBUG:scrapy.downloadermiddlewares.redirect:Redirecting (301) to <GET http://www.google.com/> from <GET http://google.com>
DEBUG:scrapy.core.engine:Crawled (200) <GET http://www.google.com/> (referer: None)
ERROR:scrapy.core.scraper:Spider error processing <GET http://www.google.com/> (referer: None)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "...", line ..., in parse
    raise Exception
Exception
2019-08-22 10:49:50 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.google.com/> (referer: None)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "...", line ..., in parse
    raise Exception
Exception
INFO:scrapy.core.engine:Closing spider (finished)
2019-08-22 10:49:50 [scrapy.core.engine] INFO: Closing spider (finished)
INFO:scrapy.statscollectors:Dumping Scrapy stats:
{
  ...
}
INFO:scrapy.core.engine:Spider closed (finished)
2019-08-22 10:49:50 [scrapy.core.engine] INFO: Spider closed (finished)
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
No mail is sent. I believe Scrapy catches the exception internally, writes it to the log, and then exits without an error.


How can I get the exception through to Jenkins?

The problem is that scrapy does not exit with a non-zero exit code when scraping fails (src:)
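
You can confirm this from outside Scrapy: even though parse() raises, the process still returns exit code 0, which is why the sh step succeeds and Jenkins' catch block never runs. A quick check along these lines (assuming the spider above is in the current project):

import subprocess

# Run the failing spider and inspect the exit code Jenkins would see.
result = subprocess.run(["scrapy", "crawl", "exception_test"])
print(result.returncode)  # prints 0 even though parse() raised an Exception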

As suggested in the comments on that issue, I recommend adding a custom command ()
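
Below is a minimal sketch of such a command. The project name myproject, the file/command name crawl_or_fail, and the use of the log_count/ERROR stat as the failure signal are my own assumptions, not something the issue prescribes:

# myproject/commands/crawl_or_fail.py  (file and command names are placeholders)
# Enable it in settings.py with:
#     COMMANDS_MODULE = "myproject.commands"
# (the commands package also needs an empty __init__.py)
from scrapy.commands.crawl import Command as CrawlCommand
from scrapy.exceptions import UsageError


class Command(CrawlCommand):
    """Like `scrapy crawl`, but exit non-zero when the spider logged errors."""

    def run(self, args, opts):
        if len(args) != 1:
            raise UsageError("exactly one spider name is required")
        spider_name = args[0]

        # Create the Crawler explicitly so its stats collector is still
        # reachable after the crawl has finished.
        crawler = self.crawler_process.create_crawler(spider_name)
        self.crawler_process.crawl(crawler, **opts.spargs)
        self.crawler_process.start()

        # Exceptions raised in spider callbacks are logged as ERROR, so a
        # non-zero log_count/ERROR means the run failed.
        if crawler.stats.get_value("log_count/ERROR"):
            self.exitcode = 1

With that in place, the Jenkinsfile would run scrapy crawl_or_fail exception_test instead of scrapy crawl exception_test; the non-zero exit code makes the sh step fail, so the pipeline's catch block fires and the mail step finally sends the notification.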