Scrapy won't crawl past the first page

I've hit a dead end after four days on this problem. I want to crawl the Ledcor careers site. On each job listing page, I follow each job link and extract the job title. That part has been working so far.

Now I'm trying to get the spider to move on to the next job listing page (e.g. from ?page=1 to ?page=2) and crawl all the jobs. My crawl rules don't work, and I have no idea what's wrong or what's missing. Please help.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from craigslist_sample.items import CraigslistSampleItem

class LedcorSpider(CrawlSpider):
    name = "ledcor"
    allowed_domains = ["www.ledcor.com"]
    start_urls = ["http://www.ledcor.com/careers/search-careers"]


    rules = [
        Rule(SgmlLinkExtractor(allow=("http://www.ledcor.com/careers/search-careers\?page=\d",),restrict_xpaths=('//div[@class="pager"]/a',)), follow=True),
        Rule(SgmlLinkExtractor(allow=("http://www.ledcor.com/job\?(.*)",)),callback="parse_items")
    ]

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        item = CraigslistSampleItem()
        item['title'] = hxs.select('//h1/text()').extract()[0].encode('utf-8')
        item['link'] = response.url
        return item
Here is Items.py:

from scrapy.item import Item, Field

class CraigslistSampleItem(Item):
    title = Field()
    link = Field()
    desc = Field()
And here is Pipelines.py:

class CraigslistSamplePipeline(object):
    def process_item(self, item, spider):
        return item
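For completeness: the pipeline only runs if it is enabled in settings.py. A minimal sketch, assuming the default craigslist_sample project layout (this file is not shown in the original post):

# settings.py -- assumed project layout, not part of the original post
ITEM_PIPELINES = [
    'craigslist_sample.pipelines.CraigslistSamplePipeline',
]
# Newer Scrapy versions expect a dict instead:
# ITEM_PIPELINES = {'craigslist_sample.pipelines.CraigslistSamplePipeline': 300}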
Update (per @blender's suggestion): it still doesn't crawl.

rules = [
    Rule(SgmlLinkExtractor(allow=(r"http://www.ledcor.com/careers/search-careers\?page=\d",),restrict_xpaths=('//div[@class="pager"]/a',)), follow=True),
    Rule(SgmlLinkExtractor(allow=("http://www.ledcor.com/job\?(.*)",)),callback="parse_items")
]

You need to escape the question mark and use a raw string for the regex:

r"http://www\.ledcor\.com/careers/search-careers\?page=\d"
Otherwise, the unescaped ? just makes the preceding s optional, and the pattern matches URLs like these instead:
…careerspage=2
…careerpage=3
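A quick way to see the difference (a hypothetical check with Python's re module, not part of the original answer):

import re

escaped   = r"http://www\.ledcor\.com/careers/search-careers\?page=\d"  # "?" escaped
unescaped = r"http://www.ledcor.com/careers/search-careers?page=\d"     # bare "?" makes the preceding "s" optional

url = "http://www.ledcor.com/careers/search-careers?page=2"

print(bool(re.search(escaped, url)))    # True  -- \? matches the literal "?" in the URL
print(bool(re.search(unescaped, url)))  # False -- the pattern now wants "...careerspage=2"
print(bool(re.search(unescaped, "http://www.ledcor.com/careers/search-careerspage=2")))  # True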

Try the following:

rules = [Rule(SgmlLinkExtractor(), follow=True, callback="parse_items")]

Also, you need to make the appropriate changes in pipelines.py. Please paste your pipeline and item code.

Your restrict_xpaths argument is wrong. Remove it and it will work:

$ scrapy shell http://www.ledcor.com/careers/search-careers

In [1]: from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

In [2]: lx = SgmlLinkExtractor(allow=("http://www.ledcor.com/careers/search-careers\?page=\d",),restrict_xpaths=('//div[@class="pager"]/a',))

In [3]: lx.extract_links(response)
Out[3]: []

In [4]: lx = SgmlLinkExtractor(allow=("http://www.ledcor.com/careers/search-careers\?page=\d",))

In [5]: lx.extract_links(response)
Out[5]: 
[Link(url='http://www.ledcor.com/careers/search-careers?page=1', text=u'', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=2', text=u'2', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=3', text=u'3', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=4', text=u'4', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=5', text=u'5', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=6', text=u'6', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=7', text=u'7', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=8', text=u'8', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=9', text=u'9', fragment='', nofollow=False),
 Link(url='http://www.ledcor.com/careers/search-careers?page=10', text=u'10', fragment='', nofollow=False)]
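Putting the two fixes together (drop restrict_xpaths, escape the "?" and use raw strings), the rules could look like this. A sketch only, not tested against the live site:

rules = [
    # Follow the pagination links
    Rule(SgmlLinkExtractor(allow=(r"http://www\.ledcor\.com/careers/search-careers\?page=\d",)),
         follow=True),
    # Parse the individual job pages
    Rule(SgmlLinkExtractor(allow=(r"http://www\.ledcor\.com/job\?(.*)",)),
         callback="parse_items"),
]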

With your rule my spider crawls every link, and I don't want that. I tried your suggestion, but the spider still won't crawl past the start URL.