Web crawler scrapy: defining spider settings

I am trying to override some settings for a spider invoked from a script, but the settings don't seem to take effect:

from scrapy import log
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from someproject.spiders import SomeSpider

spider = SomeSpider()
overrides = {
    'LOG_ENABLED': True,
    'LOG_STDOUT': True,
}
settings = get_project_settings()
settings.overrides.update(overrides)
log.start()
crawler = CrawlerProcess(settings)
crawler.install()
crawler.configure()
crawler.crawl(spider)
crawler.start()
In the spider:

from scrapy.spider import BaseSpider

class SomeSpider(BaseSpider):

    def __init__(self):
        self.start_urls = [ 'http://somedomain.com' ]

    def parse(self, response):
        print 'some test' # won't print anything
        exit(0) # will normally exit failing the crawler
By setting LOG_ENABLED and LOG_STDOUT, I expected to see the 'some test' string printed in the log. Also, among the other settings I have tried, I can't seem to redirect the log to LOG_FILE either.

I must be doing something wrong... Please help =D

Use log.msg('some test') to write to the log.
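A minimal sketch of what this looks like inside the spider, using the old scrapy.log API that the question's imports point to (the spider name 'somespider' is illustrative):

from scrapy import log
from scrapy.spider import BaseSpider

class SomeSpider(BaseSpider):
    name = 'somespider'
    start_urls = ['http://somedomain.com']

    def parse(self, response):
        # log.msg goes through Scrapy's logging machinery, so it
        # respects the LOG_ENABLED / LOG_STDOUT / LOG_FILE settings,
        # unlike a bare print statement
        log.msg('some test', level=log.INFO)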

After starting the crawler, you may also need to start the Twisted reactor:

from twisted.internet import reactor
#...other imports...

#...setup crawler...
crawler.start()
reactor.run()
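Combined with the settings override from the question, the driver script would look roughly like this. Note this is a sketch against the same legacy API the question uses (settings.overrides, crawler.install() and the scrapy.log module were all removed in later Scrapy releases), not something that runs on modern Scrapy:

from twisted.internet import reactor
from scrapy import log
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from someproject.spiders import SomeSpider

settings = get_project_settings()
settings.overrides.update({
    'LOG_ENABLED': True,
    'LOG_STDOUT': True,
})

log.start()                       # start logging once the overrides are in place
crawler = CrawlerProcess(settings)
crawler.install()
crawler.configure()
crawler.crawl(SomeSpider())
crawler.start()
reactor.run()                     # run the Twisted event loop so the crawl executes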