Logging: setting the log level has no effect on Scrapy
I am running a Scrapy crawler with CrawlerProcess, as shown below:
import logging

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

logging.basicConfig(level=logging.INFO)
l = logging.getLogger("crawl")
try:
    p = CrawlerProcess(get_project_settings())
    crawler = p.create_crawler('my_crawler')
    p.crawl(crawler)
    p.start()
    crawl_stats = crawler.stats.get_stats()
    # ... using crawl_stats
except Exception:
    l.exception("Failed to crawl")
My settings.py has the following logging settings:
LOG_ENABLED = True
LOG_LEVEL = 'WARNING'
When the crawler runs, Scrapy prints a large number of DEBUG messages to the console. Setting the log level to 'WARNING' has no effect.
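A quick way to check whether settings.py is being picked up at all is to print the value returned by get_project_settings(); Scrapy resolves the settings module through scrapy.cfg or the SCRAPY_SETTINGS_MODULE environment variable, and falls back to its default LOG_LEVEL of 'DEBUG' when it finds neither. A minimal check:

from scrapy.utils.project import get_project_settings

# If this prints 'DEBUG' instead of 'WARNING', the project settings
# module was never found and Scrapy is running on its defaults.
print(get_project_settings().get('LOG_LEVEL'))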
Environment:
Scrapy=2.5.0
Python=3.8
Debian

I had the same problem. For some reason, the project settings were not picked up automatically unless they were specified explicitly:
import logging

from scrapy.crawler import CrawlerProcess
from scrapy.settings import Settings

from path.to.your.settings import settings as st

logging.basicConfig(level=logging.INFO)
l = logging.getLogger("crawl")
try:
    crawler_settings = Settings()
    crawler_settings.setmodule(st)
    p = CrawlerProcess(settings=crawler_settings)
    crawler = p.create_crawler('my_crawler')
    p.crawl(crawler)
    p.start()
    crawl_stats = crawler.stats.get_stats()
    # ... using crawl_stats
except Exception:
    l.exception("Failed to crawl")
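As a quick sanity check, the effective level can be read back from the process before starting the crawl; p here is the CrawlerProcess built above:

# Should print 'WARNING' if the explicit settings module took effect.
print(p.settings.get('LOG_LEVEL'))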
It worked for me.

No, it did not solve the problem for me.
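If neither variant helps, another option is to keep Scrapy from installing its own root log handler and to configure the standard logging module directly. A minimal sketch, assuming the goal is simply to control console verbosity:

import logging

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Configure the root logger ourselves and tell Scrapy not to install
# its own root handler on top of it.
logging.basicConfig(level=logging.WARNING)
p = CrawlerProcess(get_project_settings(), install_root_handler=False)
crawler = p.create_crawler('my_crawler')
p.crawl(crawler)
p.start()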