How do I get Scrapy to download a scraped file when the spider runs inside Celery?

I need to download a large file (roughly 100 MB). Everything works when I run the spider as a plain script, but when it is executed from a Celery task I get the following log:

[INFO/ForkPoolWorker-1] Spider opened
[INFO/ForkPoolWorker-1] Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
[DEBUG/Process-1:1] Crawled (200) <GET downloading_file_url> (referer: None)
[INFO/ForkPoolWorker-1] Task my_task[hash] succeeded in 3.369s: None
[INFO/ForkPoolWorker-1] Received SIGTERM, shutting down gracefully. Send again to force
I can point the spider at a local copy of the file and scrape that, but I don't want to have to download the file before every task execution.
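The log hints at what goes wrong: the crawl runs in a child process (`Process-1:1`), the task reports `succeeded ... : None` after only a few seconds, and the worker's `SIGTERM` then kills the still-running download. A common fix is to run the crawl in its own process and *join* it so the task does not return early. Below is a minimal stdlib sketch of that pattern; the `run_spider` body is a stand-in for a real `scrapy.crawler.CrawlerProcess` call, the Celery `@app.task` decorator is omitted, and all names here are hypothetical:

```python
import multiprocessing
import os
import tempfile
import time


def run_spider(result_path):
    # Placeholder for the real crawl: in the actual task this function
    # would build a scrapy.crawler.CrawlerProcess, schedule the spider,
    # and call .start(), with the spider writing the downloaded file
    # to result_path. Here we just simulate a slow download.
    time.sleep(0.2)
    with open(result_path, "w") as f:
        f.write("downloaded")


def download_task(result_path):
    # Run the crawl in a dedicated child process (Scrapy's Twisted
    # reactor cannot be restarted inside a long-lived Celery worker
    # process), then block until it finishes.
    p = multiprocessing.Process(target=run_spider, args=(result_path,))
    p.start()
    # Without this join() the task returns immediately, Celery marks it
    # "succeeded", and a worker shutdown SIGTERM can kill the crawl
    # while the file is still downloading.
    p.join()
    return p.exitcode


if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "out.bin")
    print(download_task(path), os.path.exists(path))
```

In the real setup, `download_task` would be the body of the Celery task; the join keeps the task alive for the full duration of the 100 MB download, so the log line `Task my_task[...] succeeded` only appears once the file is actually on disk.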