String formatting in a Python list


I'm working on a web scraper and stumbled upon some odd behavior when using string placeholders in a list comprehension (below is a snippet from my PyCharm project):

Here is the error:

Traceback (most recent call last):
  File "anaconda3/envs/scraper/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/cmdline.py", line 148, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/crawler.py", line 243, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/crawler.py", line 134, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/crawler.py", line 330, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/spiderloader.py", line 61, in from_settings
    return cls(settings)
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/spiderloader.py", line 25, in __init__
    self._load_all_spiders()
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/spiderloader.py", line 47, in _load_all_spiders
    for module in walk_modules(name):
  File "anaconda3/envs/scraper/lib/python3.6/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "anaconda3/envs/scraper/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "Programming/my_projects/web-scrapers/arms_transfers/arms_transfers/spiders/unroca.py", line 9, in <module>
    class UnrocaSpider(scrapy.Spider):
  File "Programming/my_projects/web-scrapers/arms_transfers/arms_transfers/spiders/unroca.py", line 19, in UnrocaSpider
    start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]
  File "Programming/my_projects/web-scrapers/arms_transfers/arms_transfers/spiders/unroca.py", line 19, in <listcomp>
    start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]
NameError: name 'base_url' is not defined
Yet it works exactly as I would expect when I run the same code in a Jupyter notebook:

 ['https://www.unroca.org/aruba/report/2010/',
 'https://www.unroca.org/aruba/report/2011/',
 'https://www.unroca.org/aruba/report/2012/',
 'https://www.unroca.org/aruba/report/2013/',
 'https://www.unroca.org/aruba/report/2014/',
 'https://www.unroca.org/aruba/report/2015/',
 'https://www.unroca.org/aruba/report/2016/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2010/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2011/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2012/',
 'https://www.unroca.org/islamic-republic-of-afghanistan/report/2013/',...]

The PyCharm project and the Jupyter notebook use the same conda environment and the same Python 3.6.3 interpreter. Can anyone offer some insight into why the behavior differs?
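
Although the original PyCharm snippet is not reproduced here, the traceback shows the failing list comprehension at class scope inside UnrocaSpider. In Python 3 a list comprehension runs in its own scope and cannot see names defined in the enclosing class body; only its outermost iterable is evaluated in the class scope. A minimal sketch of that behavior (the Demo class, build_urls helper, and their attributes are hypothetical, not the original spider code):

def build_urls():
    # Inside a function, the comprehension can see base_url through the
    # enclosing function scope, so this works.
    base_url = 'https://www.unroca.org/{}/report/{}/'
    params = [('aruba', 2010), ('aruba', 2011)]
    return [base_url.format(country, year) for country, year in params]

print(build_urls())  # works as expected

class Demo:
    base_url = 'https://www.unroca.org/{}/report/{}/'
    params = [('aruba', 2010), ('aruba', 2011)]
    # Raises NameError: name 'base_url' is not defined. The comprehension body
    # runs in its own scope, which skips the class scope entirely; only the
    # outermost iterable (params) is evaluated in the class body itself.
    urls = [base_url.format(country, year) for country, year in params]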

To answer my own question: if you need to generate your own list of start URLs for a scrapy.Spider subclass, you should override scrapy.Spider.start_requests(self). In my case that looks like this:

import itertools

import pycountry
import scrapy


class UnrocaSpider(scrapy.Spider):
    name = 'unroca'
    allowed_domains = ['unroca.org']

    def start_requests(self):
        # Prefer the official name where pycountry provides one, otherwise
        # fall back to the short name, and slugify for the URL.
        country_names = [country.official_name if hasattr(country, 'official_name')
                         else country.name for country in list(pycountry.countries)]
        country_names = [name.lower().replace(' ', '-') for name in country_names]

        base_url = 'https://www.unroca.org/{}/report/{}/'
        url_param_tuples = list(itertools.product(country_names, range(2010, 2017)))
        start_urls = [base_url.format(param_tuple[0], param_tuple[1]) for param_tuple in url_param_tuples]
        for url in start_urls:
            yield scrapy.Request(url, self.parse)
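
For completeness (this alternative is not part of the original answer, only a sketch under the same assumptions, and the build_start_urls helper name is hypothetical), the URL list can also be built in a module-level function, where ordinary function scoping applies, and assigned to start_urls so that Scrapy's default start_requests() picks it up:

import itertools

import pycountry
import scrapy


def build_start_urls():
    # Builds the same UNROCA report URLs outside the class body, where the
    # comprehension can see base_url through normal function scoping.
    country_names = [country.official_name if hasattr(country, 'official_name')
                     else country.name for country in pycountry.countries]
    country_names = [name.lower().replace(' ', '-') for name in country_names]
    base_url = 'https://www.unroca.org/{}/report/{}/'
    return [base_url.format(country, year)
            for country, year in itertools.product(country_names, range(2010, 2017))]


class UnrocaSpider(scrapy.Spider):
    name = 'unroca'
    allowed_domains = ['unroca.org']
    # No class-scope comprehension here: the list was already built above.
    start_urls = build_start_urls()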

If you run the code from PyCharm, is this a warning in the IDE or an actual error? If it's an actual error, please copy and paste it here. Could it be that PyCharm is actually complaining about pycountry being missing, rather than base_url, and the squiggly underline (if that is what you mean) is simply in the wrong place?

I've updated my question with the error I get when running the spider from the command line.

The error refers to "line 19", but the code in your question isn't 19 lines long. This may sound pedantic, but I'm 100% sure that part of your problem comes from code you haven't shown us. Please edit the question to include the code snippet and the error you get when running it. Incidentally, judging from the error I suspect you've put the code in the class body rather than in a method of the class, which doesn't do what you expect. (I'm not sure what you would expect, but it doesn't do anything sensible.) Of course that is only a guess, since I can't see all of your code.

Just updated the code to include everything up to line 19. The error listed is the same one I get when running the scrapy crawl command.