Python: unable to extract data using Scrapy


I am trying to extract pre-open stock market data from the NSE (India). I can get the data from the scrapy shell, but when I run the spider as a file, or run the whole code in PyCharm, I get no output. My code is:

class PreopenMarketDataSpider(scrapy.Spider):

    name = 'preopen_market_data'
    allowed_domains = ['www1.nseindia.com']
    start_urls = ['https://www1.nseindia.com/live_market/dynaContent/live_watch/pre_open_market/pre_open_market.htm']

    def parse(self, response):
        stocks = ['RELIANCE', 'TATASTEEL', 'LT']
        for stock in stocks:
            stock_url  = 'https://www1.nseindia.com/live_market/dynaContent/live_analysis/pre_open/preOpenOrderBook.jsp?param='+str('stock')+'EQN&symbol='+str('stock')
            yield Request(stock_url, callback=self.data)
    def data(self,response):
        p=response.xpath('//*[@class="orderBookFontCBig"]/text()').extract()
        yield Request(p,callback=self,meta={'Stock':p})

Why is it not fetching the data? What am I doing wrong? Can we do this via the FormRequest method?

In the last line, just print p and you will get the output data:

import scrapy
from scrapy import Request

class PreopenMarketDataSpider(scrapy.Spider):

    name = 'preopen_market_data'
    allowed_domains = ['www1.nseindia.com']
    start_urls = ['https://www1.nseindia.com/live_market/dynaContent/live_watch/pre_open_market/pre_open_market.htm']

    def parse(self, response):
        stocks = ['RELIANCE', 'TATASTEEL', 'LT']
        for stock in stocks:
            stock_url = 'https://www1.nseindia.com/live_market/dynaContent/live_analysis/pre_open/preOpenOrderBook.jsp?param=' + stock + 'EQN&symbol=' + stock
            yield Request(stock_url, callback=self.data)

    def data(self, response):
        p = response.xpath('//*[@class="orderBookFontCBig"]/text()').extract()
        print(p)
        # yield Request(p, callback=self, meta={'Stock': p})
Printed output:
['1975.00']
['526.00']
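On the "no output when run as a file" side of the question: running the .py file directly never starts the crawl engine. Assuming the spider file lives inside a Scrapy project, it is launched through the scrapy CLI by the spider's name attribute, for example:

```shell
# Start the spider named 'preopen_market_data' and write the yielded
# items to a JSON feed file (the filename prices.json is illustrative)
scrapy crawl preopen_market_data -o prices.json
```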

yield Request(p, callback=self, meta={'Stock': p})

That is not how you yield output in Scrapy. It tells Scrapy to visit the URL 1975.00 and, if that succeeds, to try to call self, which is a meaningless instruction. I'm a bit surprised it didn't throw some kind of error.

What you want to do is create an item, put the stock value into it, and yield that item.
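A rough sketch of that approach (the helpers make_stock_url and build_item are illustrative names, not part of the original spider): the callback yields a plain dict, which Scrapy accepts as an item, and the symbol travels with the request via meta so it can be attached to the result.

```python
def make_stock_url(stock):
    # Build the pre-open order-book URL for one symbol; note that the
    # variable `stock` is interpolated, not the literal string 'stock'.
    base = ('https://www1.nseindia.com/live_market/dynaContent/'
            'live_analysis/pre_open/preOpenOrderBook.jsp')
    return f'{base}?param={stock}EQN&symbol={stock}'

def build_item(meta, extracted):
    # Turn the XPath extraction result (a list of strings) into a plain
    # dict; Scrapy treats a yielded dict as an item.
    return {'stock': meta.get('stock'),
            'price': extracted[0] if extracted else None}

# Inside the spider, the two callbacks would then look roughly like:
#
#     def parse(self, response):
#         for stock in ['RELIANCE', 'TATASTEEL', 'LT']:
#             yield Request(make_stock_url(stock), callback=self.data,
#                           meta={'stock': stock})
#
#     def data(self, response):
#         p = response.xpath('//*[@class="orderBookFontCBig"]/text()').extract()
#         yield build_item(response.meta, p)

print(build_item({'stock': 'RELIANCE'}, ['1975.00']))
# → {'stock': 'RELIANCE', 'price': '1975.00'}
```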
