
Web scraping: Calling a Scrapy spider from a Python script?


I have created a spider named aqaq. It lives in the file image.py, whose contents are as follows:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request
a=[]
from aqaq.items import aqaq
import os
class aqaqspider(BaseSpider):
    name = "aqaq"
    allowed_domains = ["aqaq.com"]
    start_urls = [
                        "http://www.aqaq.com/list/female/view-all?limit=all"
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites=hxs.select('//ul[@class="list"]/li')
        for site in sites:
                name=site.select('a[@class="product-name"]/@href').extract()
                a.append(name)
        f=open("url","w+")
        for i in a:
                if str(i)=='[]':
                        pass;
                else:
                        f.write(str(i)[3:-2]+os.linesep)
                        yield Request(str(i)[3:-2].rstrip('\n'),callback=self.parsed)

        f.close()
    def parsed(self,response):
        hxs = HtmlXPathSelector(response)
        sites=hxs.select('//div[@class="form"]')
        items=[]
        for site in sites:
                item=aqaq()
                item['title']=site.select('h1/text()').extract()
                item['cost']=site.select('div[@class="price-container"]/span[@class="regular-price"]/span[@class="price"]/text()').extract()
                item['desc']=site.select('div[@class="row-block"]/p/text()').extract()
                item['color']=site.select('div[@id="colours"]/ul/li/a/img/@src').extract()
                items.append(item)
                return items
I am trying to run this spider from a Python script, as follows:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy import log, signals
from spiders.image import aqaqspider
from scrapy.xlib.pydispatch import dispatcher
def stop_reactor():
    reactor.stop()
dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = aqaqspider(domain='aqaq.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start(loglevel=log.DEBUG)
log.msg("------------>Running reactor")
result = reactor.run()
print result
log.msg("------------>Running stoped")
When I run the above script, I get the following error:

2013-09-27 19:21:06+0530 [aqaq] ERROR: Error downloading <GET http://www.aqaq.com/list/female/view-all?limit=all>: 'Settings' object has no attribute 'overrides'

I am a beginner; any help would be appreciated.

You have to use CrawlerSettings instead of Settings.

Change these lines:

    from scrapy.settings import Settings
    crawler = Crawler(Settings())

to:

    from scrapy.settings import CrawlerSettings
    crawler = Crawler(CrawlerSettings())
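For reference, here is a minimal sketch of the runner script with that fix applied. It assumes the same Scrapy 0.18-era API the question already uses (scrapy.crawler.Crawler, scrapy.log, scrapy.xlib.pydispatch); in later Scrapy versions this pattern was replaced by scrapy.crawler.CrawlerProcess.

    from twisted.internet import reactor
    from scrapy.crawler import Crawler
    from scrapy.settings import CrawlerSettings
    from scrapy import log, signals
    from scrapy.xlib.pydispatch import dispatcher
    from spiders.image import aqaqspider

    def stop_reactor():
        reactor.stop()

    # Shut the Twisted reactor down once the spider has finished.
    dispatcher.connect(stop_reactor, signal=signals.spider_closed)

    spider = aqaqspider(domain='aqaq.com')
    crawler = Crawler(CrawlerSettings())  # CrawlerSettings, not Settings
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()

    log.start(loglevel=log.DEBUG)
    log.msg("------------>Running reactor")
    reactor.run()  # blocks until stop_reactor() is called
    log.msg("------------>Running stopped")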


您必须使用
爬网设置
而不是
设置

更改此行:

    from scrapy.settings import Settings
    crawler = Crawler(Settings())
作者:

这一行:

    from scrapy.settings import Settings
    crawler = Crawler(Settings())
作者:


What if my Python script is in another directory? You could also take a look at this: can we use the Scrapy downloader as a library in a custom Python script? @KiranKyle You can use the downloader component, but as far as I know it is tied to Twisted, so you would need to set up the Twisted reactor and so on.
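On the other-directory question, one possible approach is sketched below; the project path and settings module are placeholders for your actual layout. It puts the Scrapy project on sys.path and points the standard SCRAPY_SETTINGS_MODULE environment variable at its settings module before any Scrapy imports run:

    import os
    import sys

    # Placeholder path: wherever the aqaq Scrapy project lives on disk.
    sys.path.append('/path/to/aqaq_project')
    # Standard Scrapy env var naming the settings module to load.
    os.environ.setdefault('SCRAPY_SETTINGS_MODULE', 'aqaq.settings')

    # The imports from the question now resolve regardless of the
    # directory the script is launched from.
    from spiders.image import aqaqspider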