Python Scrapy - NameError: global name 'logger' is not defined
I'm trying to slightly modify Scrapy's retry behaviour by changing the retry middleware. I'm using this middleware:
class Retry500Middleware(RetryMiddleware):
    def _retry(self, request, reason, spider):
        retries = request.meta.get('retry_times', 0) + 1
        if retries <= self.max_retry_times:
            # 'logger' is used here but never imported or defined,
            # which raises the NameError
            logger.debug("Retrying %(request)s (failed %(retries)d times): %(reason)s",
                         {'request': request, 'retries': retries, 'reason': reason},
                         extra={'spider': spider})
            retryreq = request.copy()
            retryreq.meta['retry_times'] = retries
            retryreq.meta['download_timeout'] = 600
            retryreq.dont_filter = True
            retryreq.priority = request.priority + self.priority_adjust
            return retryreq
        else:
            logger.error("Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
                         {'request': request, 'retries': retries, 'reason': reason},
                         extra={'spider': spider})
For the final retry, I use this code instead:
import logging
logging.log(logging.ERROR, "Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
{'request': request, 'retries': retries, 'reason': reason},
extra={'spider': spider})
You can either put self.logger = logging.getLogger(__name__) in the middleware's __init__(), or define a global logger after importing logging:
import logging
logger = logging.getLogger(__name__)
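Both fixes can be sketched without pulling in Scrapy itself. The class names below are stand-ins for the Retry500Middleware above, just to show where each logger lives (the module-level pattern is the one Scrapy's own retry middleware uses):

```python
import logging

# Fix 1: a module-level ("global") logger, defined once at import time.
logger = logging.getLogger(__name__)

class GlobalLoggerMiddleware:
    """Stand-in for a RetryMiddleware subclass using the module logger."""
    def give_up(self, request, retries, reason):
        # 'logger' resolves via the module namespace, so no NameError
        logger.error("Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
                     {'request': request, 'retries': retries, 'reason': reason})

# Fix 2: a per-instance logger created in __init__.
class InstanceLoggerMiddleware:
    """Stand-in for a RetryMiddleware subclass using self.logger."""
    def __init__(self):
        self.logger = logging.getLogger(__name__)

    def give_up(self, request, retries, reason):
        self.logger.error("Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
                          {'request': request, 'retries': retries, 'reason': reason})
```

Either way the log call itself is unchanged; the only difference is where the logger object is looked up. The module-level form is slightly more idiomatic for Scrapy middlewares, since the logger exists even in classmethods such as from_crawler.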