Python: passing a filename argument to a CSV export pipeline in Scrapy

I need Scrapy to take an argument from the command line (-a FILE_NAME="stuff") and apply it to the file created in CsvWriterPipeline in my pipelines.py file. (The reason I went with pipelines.py is that the built-in exporter was repeating data and headers in the output file; the same code, written in a pipeline instead, fixed it.)

I tried getting the project settings via from scrapy.utils.project import get_project_settings, as shown in a linked answer,

but I could not change the file name from the command line.

I have also tried implementing @avaleske's solution from that page, since it specifically addresses this problem, but I don't know where in my Scrapy folder the code he describes should go.

Help?

settings.py:

BOT_NAME = 'internal_links'

SPIDER_MODULES = ['internal_links.spiders']
NEWSPIDER_MODULE = 'internal_links.spiders'
CLOSESPIDER_PAGECOUNT = 100
ITEM_PIPELINES = ['internal_links.pipelines.CsvWriterPipeline']
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'internal_links (+http://www.mycompany.com)'
FILE_NAME = "mytestfilename"
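
A side note on ITEM_PIPELINES: the list form above is the old syntax; newer Scrapy releases expect a dict that maps each pipeline path to an order value, e.g.:

ITEM_PIPELINES = {
    'internal_links.pipelines.CsvWriterPipeline': 300,
}
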
pipelines.py:

import csv

class CsvWriterPipeline(object):

    def __init__(self, file_name):
        header = ["URL"]
        self.file_name = file_name
        self.csvwriter = csv.writer(open(self.file_name, 'wb'))
        self.csvwriter.writerow(header)


    def process_item(self, item, internallinkspider):
        # build your row to export, then export the row
        row = [item['url']]
        self.csvwriter.writerow(row)
        return item
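
Why the code above never sees FILE_NAME: unless a pipeline class defines a from_crawler or from_settings classmethod, Scrapy instantiates it with no arguments, so the FILE_NAME value in settings.py never reaches __init__. A minimal sketch of the from_settings variant (an alternative to the from_crawler hook used in the answer below):

    @classmethod
    def from_settings(cls, settings):
        # alternative construction hook: Scrapy calls this with the project
        # settings (including any -s overrides) and uses the returned instance
        return cls(settings.get("FILE_NAME"))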
spider.py:

from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from internal_links.items import MyItem



class MySpider(CrawlSpider):
    name = 'internallinkspider'
    allowed_domains = ['angieslist.com']
    start_urls = ['http://www.angieslist.com']

    rules = (Rule(SgmlLinkExtractor(), callback='parse_url', follow=True), )

    def parse_url(self, response):
        item = MyItem()
        item['url'] = response.url

        return item
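
For context on the -a flag from the question: Scrapy passes spider arguments to the spider's __init__ as keyword arguments, so they become spider attributes rather than settings. A minimal sketch of that mechanism (hypothetical, not part of the original code):

from scrapy.contrib.spiders import CrawlSpider

class ArgAwareSpider(CrawlSpider):
    name = 'internallinkspider'

    def __init__(self, FILE_NAME=None, *args, **kwargs):
        # scrapy crawl internallinkspider -a FILE_NAME="stuff" arrives here
        super(ArgAwareSpider, self).__init__(*args, **kwargs)
        self.file_name = FILE_NAME

A pipeline could read such a value through the spider object it receives in process_item(item, spider), but the settings-based approach in the answer below keeps the pipeline independent of any particular spider.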
You can use the settings mechanism together with the -s command-line option, which overrides a project setting for a single run:

scrapy crawl internallinkspider -s FILE_NAME="stuff"
Then, in the pipeline:

import csv

class CsvWriterPipeline(object):
    @classmethod
    def from_crawler(cls, crawler):
        settings = crawler.settings
        file_name = settings.get("FILE_NAME")
        return cls(file_name)

    def __init__(self, file_name):
        header = ["URL"]
        self.csvwriter = csv.writer(open(file_name, 'wb'))
        self.csvwriter.writerow(header)

    def process_item(self, item, internallinkspider):
        # build your row to export, then export the row
        row = [item['url']]
        self.csvwriter.writerow(row)
        return item
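
One caveat: open(file_name, 'wb') is Python 2 idiom; under Python 3 the csv module needs a text-mode file opened with newline=''. A sketch of the same pipeline adjusted for Python 3, which also closes the file when the crawl finishes:

import csv

class CsvWriterPipeline(object):
    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings merges settings.py with any -s command-line overrides
        return cls(crawler.settings.get("FILE_NAME"))

    def __init__(self, file_name):
        self.file = open(file_name, 'w', newline='')  # text mode for Python 3's csv
        self.csvwriter = csv.writer(self.file)
        self.csvwriter.writerow(["URL"])

    def process_item(self, item, spider):
        self.csvwriter.writerow([item['url']])
        return item

    def close_spider(self, spider):
        # standard pipeline hook: release the file handle when the spider closes
        self.file.close()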

Comment: I'm getting an error at return cls(my_settings): NameError: global name 'my_settings' is not defined. (Presumably a leftover my_settings name from an earlier draft; the from_crawler above defines file_name and passes that to cls.)

Comment: Nice. Thanks for the help.