Python Scrapy SgmlLinkExtractor: adding arbitrary URLs


How do I add a URL to SgmlLinkExtractor? That is, how do I add an arbitrary URL and have my callback run on it?

To elaborate, take dirbot as an example:


parse_category only visits pages whose URLs match the link extractor rule, SgmlLinkExtractor(allow='directory.google.com/[A-Z][a-zA-Z_/]+$').
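In dirbot-style terms, that rule sits on a CrawlSpider roughly like the sketch below. Only the allow pattern comes from the question; the spider name, start URL, and the empty parse_category body are placeholders:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class GoogleDirSpider(CrawlSpider):
    name = "googledir"
    allowed_domains = ["directory.google.com"]
    start_urls = ["http://directory.google.com/"]

    # Only links matching the allow pattern are followed and handed to
    # parse_category; an arbitrary URL never reaches that callback.
    rules = [Rule(SgmlLinkExtractor(allow='directory.google.com/[A-Z][a-zA-Z_/]+$'),
                  callback='parse_category')]

    def parse_category(self, response):
        pass  # extract items from matched category pages here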

Use BaseSpider instead of CrawlSpider, then add your URLs to start_requests or start_urls[]:

from scrapy.spider import BaseSpider
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector

class MySpider(BaseSpider):
    name = "myspider"

    def start_requests(self):
        # Return a Request for any arbitrary URL, with whatever callback you like
        return [Request("https://www.example.com",
            callback=self.parse)]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        ...
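If the extra URLs don't need their own callbacks, listing them in start_urls works just as well; each one is requested at startup and handed to parse(). A minimal sketch with placeholder URLs and names:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class MyOtherSpider(BaseSpider):
    name = "myotherspider"
    # Every URL listed here is requested at startup and passed to parse()
    start_urls = [
        "https://www.example.com",
        "https://www.example.com/another/page",
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        ...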
The CrawlSpider in question looked like this:

class ThemenHubSpider(CrawlSpider):
    name = 'themenHub'
    allowed_domains = ['themen.t-online.de']
    start_urls = ["http://themen.t-online.de/themen-a-z/a"]
    rules = [Rule(SgmlLinkExtractor(allow=['id_\d+']), 'parse_news')]
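If you would rather keep the CrawlSpider and its rules, one option is to override start_requests so the spider still crawls start_urls through the rules but also issues one extra, arbitrary Request with its own callback. This is a sketch under that assumption; the extra URL and the parse_special callback are illustrative, not part of the original question:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.http import Request

class ThemenHubSpider(CrawlSpider):
    name = 'themenHub'
    allowed_domains = ['themen.t-online.de']
    start_urls = ["http://themen.t-online.de/themen-a-z/a"]
    rules = [Rule(SgmlLinkExtractor(allow=['id_\d+']), 'parse_news')]

    def start_requests(self):
        # Keep the normal start_urls requests (these still go through the rules)...
        for request in super(ThemenHubSpider, self).start_requests():
            yield request
        # ...and add one arbitrary URL with its own callback.
        # It must stay inside allowed_domains or the offsite middleware drops it.
        yield Request("http://themen.t-online.de/some-extra-page",
                      callback=self.parse_special)

    def parse_special(self, response):
        pass  # handle the extra, arbitrary page here

    def parse_news(self, response):
        pass  # handle pages matched by the rule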