
Python: pausing and resuming jobs does not work in a Scrapy project


I am downloading images from a website that requires authentication. Everything works and I can download the images. What I need is to be able to pause and resume the spider so that it crawls images when needed. So I followed what the Scrapy manual says and ran the spider with the command below:

scrapy crawl somespider -s JOBDIR=crawls/somespider-1
To abort the engine, press CTRL+C. To resume, run the same command again.

But after resuming, the spider shuts down within a few minutes; it does not resume from where it left off.
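For context, the JOBDIR persistence the manual describes works by serializing the scheduler's pending requests to disk (visible as the scheduler/enqueued/disk entries in the logs below), so every pending request, including its callback reference, must be picklable. A standalone sketch of that constraint, using plain pickle rather than Scrapy itself:

```python
import pickle

# Stand-in for a spider class; its methods can be looked up by
# qualified name, so pickle can serialize references to them.
class SpiderLike(object):
    def parse_photos(self, response):
        pass

# A pending request whose callback is a regular method serializes fine.
pending = {"url": "http://abcyz.com/", "callback": SpiderLike.parse_photos}
blob = pickle.dumps(pending)
assert pickle.loads(blob)["url"] == "http://abcyz.com/"

# A pending request whose callback is a lambda cannot be persisted.
broken = {"url": "http://abcyz.com/", "callback": lambda response: None}
try:
    pickle.dumps(broken)
except (pickle.PicklingError, AttributeError):
    print("lambda callbacks cannot be persisted")
```

This is only an illustration of the serialization requirement; Scrapy uses its own request queue format on top of the same idea.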

Updated:

from scrapy.spider import Spider
from scrapy.http import Request, FormRequest


class SampleSpider(Spider):
    name = "sample project"
    allowed_domains = ["xyz.com"]
    start_urls = (
        'http://abcyz.com/',
    )

    def parse(self, response):
        return FormRequest.from_response(response,
                                         formname='Loginform',
                                         formdata={'username': 'Name',
                                                   'password': '****'},
                                         callback=self.after_login)

    def after_login(self, response):
        # check that the login succeeded before going on
        if "authentication error" in str(response.body).lower():
            print "I am error"
            return
        else:
            start_urls = ['..', '..']
            for url in start_urls:
                yield Request(url=url, callback=self.parse_photos,
                              dont_filter=True)

    def parse_photos(self, response):
        # downloading image here
        pass
What am I doing wrong?

This is the log I get when I run the spider after pausing:

2014-05-13 15:40:31+0530 [scrapy] INFO: Scrapy 0.22.0 started (bot: sampleproject)
2014-05-13 15:40:31+0530 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-05-13 15:40:31+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sampleproject.spiders', 'SPIDER_MODULES': ['sampleproject.spiders'], 'BOT_NAME': 'sampleproject'}
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled downloader middlewares: RedirectMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-13 15:40:31+0530 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2014-05-13 15:40:31+0530 [sample] INFO: Spider opened
2014-05-13 15:40:31+0530 [sample] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-13 15:40:31+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-13 15:40:31+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080

......................

2014-05-13 15:42:06+0530 [sample] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 141184,
     'downloader/request_count': 413,
     'downloader/request_method_count/GET': 412,
     'downloader/request_method_count/POST': 1,
     'downloader/response_bytes': 11213203,
     'downloader/response_count': 413,
     'downloader/response_status_count/200': 412,
     'downloader/response_status_count/404': 1,
     'file_count': 285,
     'file_status_count/downloaded': 285,
     'finish_reason': 'shutdown',
     'finish_time': datetime.datetime(2014, 5, 13, 10, 12, 6, 534088),
     'item_scraped_count': 125,
     'log_count/DEBUG': 826,
     'log_count/ERROR': 1,
     'log_count/INFO': 9,
     'log_count/WARNING': 219,
     'request_depth_max': 12,
     'response_received_count': 413,
     'scheduler/dequeued': 127,
     'scheduler/dequeued/disk': 127,
     'scheduler/enqueued': 403,
     'scheduler/enqueued/disk': 403,
     'start_time': datetime.datetime(2014, 5, 13, 10, 10, 31, 232618)}
2014-05-13 15:42:06+0530 [sample] INFO: Spider closed (shutdown)
After resuming, it stops and shows:

INFO: Scrapy 0.22.0 started (bot: sampleproject)
2014-05-13 15:42:32+0530 [scrapy] INFO: Optional features available: ssl, http11, boto, django
2014-05-13 15:42:32+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'sampleproject.spiders', 'SPIDER_MODULES': ['sampleproject.spiders'], 'BOT_NAME': 'sampleproject'}
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled downloader middlewares: RedirectMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-13 15:42:32+0530 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2014-05-13 15:42:32+0530 [sample] INFO: Spider opened
2014-05-13 15:42:32+0530 [sample] INFO: Resuming crawl (276 requests scheduled)
2014-05-13 15:42:32+0530 [sample] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-05-13 15:42:32+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-05-13 15:42:32+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080


2014-05-13 15:43:19+0530 [sample] INFO: Closing spider (finished)
2014-05-13 15:43:19+0530 [sample] INFO: Dumping Scrapy stats:
    {'downloader/exception_count': 3,
     'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 3,
     'downloader/request_bytes': 132365,
     'downloader/request_count': 281,
     'downloader/request_method_count/GET': 281,
     'downloader/response_bytes': 567884,
     'downloader/response_count': 278,
     'downloader/response_status_count/200': 278,
     'file_count': 1,
     'file_status_count/downloaded': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 5, 13, 10, 13, 19, 554981),
     'item_scraped_count': 276,
     'log_count/DEBUG': 561,
     'log_count/ERROR': 1,
     'log_count/INFO': 8,
     'log_count/WARNING': 1,
     'request_depth_max': 1,
     'response_received_count': 278,
     'scheduler/dequeued': 277,
     'scheduler/dequeued/disk': 277,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/disk': 1,
     'start_time': datetime.datetime(2014, 5, 13, 10, 12, 32, 659276)}
2014-05-13 15:43:19+0530 [sample] INFO: Spider closed (finished)

Since you have to authenticate, I assume the cookies have expired by the time you resume the job. Reference:

Find out which HTTP status code is returned when the cookies expire or authentication fails; then you can use something like:

def parse(self, response):
    if response.status != 200:
        self.authenticate()
        # continue with scraping

Hope this helps.
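The status check above and the body-text check from the question could be combined into one small helper. A minimal, testable sketch, where the marker string and the choice of "good" status are assumptions about the site, not facts from the original project:

```python
# Illustrative helper: guess whether a response indicates an expired
# session or a failed login. The marker text is site-specific.
AUTH_ERROR_MARKER = b"authentication error"

def session_expired(status, body):
    # Anything other than a plain 200 is treated as suspicious here;
    # the real cutoff depends on how the site signals expiry (it may
    # be a 302 redirect to the login page, a 401, or a 403).
    if status != 200:
        return True
    # The site in the question reports failures in the page body.
    return AUTH_ERROR_MARKER in body.lower()
```

A spider's callback could call this on each response and re-send the login FormRequest whenever it returns True.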

Instead of the command you wrote, you can run:

scrapy crawl somespider --set JOBDIR=crawl1
To stop it, you must press Ctrl-C only once! Then wait for Scrapy to stop. If you press Ctrl-C twice, it will not work properly.

Then, to resume the crawl, run the same command again:

scrapy crawl somespider --set JOBDIR=crawl1

I don't think so. I check for the authentication error after login, and as for cookie expiry, I tried resuming immediately after aborting. Still the same problem. I have included the code in the question; please take a look.

@user — I have faced this problem many times and have not found a solution yet. Check this: I usually collect the failed URLs, dump them into a pickle database, and load them when starting the crawler again. I know it's not a solution, just a workaround.

How do I clean up the JOBDIR after the crawl completes?
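The workaround mentioned in the comment above (dumping failed URLs to a pickle file and reloading them on the next run) could look roughly like this; the file name and helper names are illustrative, not from the original project:

```python
import os
import pickle

FAILED_URLS_FILE = "failed_urls.pkl"

def save_failed_urls(urls):
    # Dump the URLs that failed so the next run can retry them.
    with open(FAILED_URLS_FILE, "wb") as f:
        pickle.dump(list(urls), f)

def load_failed_urls():
    # Return previously failed URLs, or an empty list on a first run.
    if not os.path.exists(FAILED_URLS_FILE):
        return []
    with open(FAILED_URLS_FILE, "rb") as f:
        return pickle.load(f)
```

On startup the spider could extend its start URLs with load_failed_urls(), and on shutdown (for example from a spider_closed signal handler) call save_failed_urls() with whatever failed during the run.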