Python 2.7: my CrawlSpider does not follow its rules

python-2.7, scrapy, scrapy-spider

With Scrapy v1.0.5, my spider is not working as expected:

import re

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

# Project-specific item and loader (import paths assumed)
from Crawler.items import NewsItem
from Crawler.loaders import BeautifulSoupItemLoader


class MaddynessSpider(CrawlSpider):
    name = "maddyness"
    allowed_domains = ["www.maddyness.com"]

    start_urls = [
        'http://www.maddyness.com/finance/levee-de-fonds/'
    ]

    # Rule 1: follow article links and parse them with parse_article
    _extract_article_links = Rule(
        LinkExtractor(
            allow=(
                r'http://www\.maddyness\.com/finance/levee-de-fonds/'
            ),
            restrict_xpaths=('//article[starts-with(@class,"post")]'),
        ),
        callback='parse_article',
    )

    # Rule 2: follow pagination links (no callback, just keep crawling)
    _extract_pagination_links = Rule(
        LinkExtractor(
            allow=(
                r'http://www\.maddyness\.com/finance/levee-de-fonds/',
                r'http://www\.maddyness\.com/page/'
            ),
            restrict_xpaths=('//div[@class="pagination-wrapper"]'),
        )
    )

    rules = (
        _extract_article_links,
        _extract_pagination_links,
    )

    def _extract_date(self, url):
        # Intended to capture the date part (three path segments) of an article URL
        match = re.match(r'\S+/\S+/\S+/(\S+/\S+/\S+)/\S+/', url)
        return match.group(1) if match else None

    def _extract_slug(self, url):
        # Intended to capture the trailing slug segment of an article URL
        match = re.match(r'\S+/\S+/\S+/\S+/\S+/\S+/(\S+)/', url)
        return match.group(1) if match else None

    def parse_article(self, response):
        """Parsing function called for each scraped article page."""
        print("la")
        article = NewsItem()
        loader = BeautifulSoupItemLoader(item=article, response=response, from_encoding='cp1252')

        #loader.add_xpath('company_name', u'//meta[@property="article:tag"]/@content')

        return loader.load_item()
My callback parse_article is never reached; the crawl output is:

[Anaconda2] C:\dev\hubble\workspaces\python\batch\scripts\Crawler>scrapy crawl maddyness

2016-04-28 17:00:03 [scrapy] INFO: Scrapy 1.0.5 started (bot: Crawler)
2016-04-28 17:00:03 [scrapy] INFO: Optional features available: ssl, http11, boto
2016-04-28 17:00:03 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Crawler.spiders', 'SPIDER_MODULES': ['Crawler.spiders'], 'BOT_NAME': 'Crawler'}
2016-04-28 17:00:04 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2016-04-28 17:00:04 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2016-04-28 17:00:04 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2016-04-28 17:00:04 [scrapy] INFO: Enabled item pipelines: ElasticsearchPipeline

2016-04-28 17:00:04 [scrapy] INFO: Spider opened
2016-04-28 17:00:04 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-28 17:00:04 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-04-28 17:00:04 [scrapy] DEBUG: Redirecting (301) to <GET https://www.maddyness.com/finance/levee-de-fonds/> from <GET http://www.maddyness.com/finance/levee-de-fonds/>
2016-04-28 17:00:04 [scrapy] DEBUG: Redirecting (301) to <GET https://www.maddyness.com/index.php?s=%23MaddyPitch> from <GET http://www.maddyness.com/index.php?s=%23MaddyPitch>
2016-04-28 17:00:04 [scrapy] DEBUG: Crawled (200) <GET https://www.maddyness.com/index.php?s=%23MaddyPitch> (referer: None)
2016-04-28 17:00:04 [scrapy] DEBUG: Crawled (200) <GET https://www.maddyness.com/finance/levee-de-fonds/> (referer: None)
2016-04-28 17:00:05 [scrapy] INFO: Closing spider (finished)
Spider closed
2016-04-28 17:00:05 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1080,
 'downloader/request_count': 4,
 'downloader/request_method_count/GET': 4,
 'downloader/response_bytes': 48223,
 'downloader/response_count': 4,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/301': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 4, 28, 15, 0, 5, 123000),
 'log_count/DEBUG': 5,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'start_time': datetime.datetime(2016, 4, 28, 15, 0, 4, 590000)}
2016-04-28 17:00:05 [scrapy] INFO: Spider closed (finished)
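
For completeness, the article rule can also be exercised on its own in scrapy shell to see which links it would pick up (a rough sketch, not something from the spider itself; in the shell the fetched page is bound to the name response):

# scrapy shell 'http://www.maddyness.com/finance/levee-de-fonds/'
from scrapy.linkextractors import LinkExtractor

article_extractor = LinkExtractor(
    allow=(r'http://www\.maddyness\.com/finance/levee-de-fonds/'),
    restrict_xpaths=('//article[starts-with(@class,"post")]'),
)

links = article_extractor.extract_links(response)
print(len(links))            # how many links the article rule would actually follow
for link in links[:5]:
    print(link.url)          # print whichever URLs survived the allow/restrict filters

# What the page actually links to, ignoring the allow pattern
all_links = LinkExtractor(
    restrict_xpaths=('//article[starts-with(@class,"post")]'),
).extract_links(response)
for link in all_links[:5]:
    print(link.url)          # note the scheme of these URLs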
Any help would be greatly appreciated; I am completely stuck.

It is simply that you are being redirected from http to https, so all the subsequent article links now start with https, while your rules are configured to extract only http links. Fix them like this:

_extract_article_links = Rule(
    LinkExtractor(
        allow=(
            r'https?://www\.maddyness\.com/finance/levee-de-fonds/'
        ),
        restrict_xpaths=('//article[starts-with(@class,"post")]'),
    ),
    callback='parse_article',
)

_extract_pagination_links = Rule(
    LinkExtractor(
        allow=(
            r'https?://www\.maddyness\.com/finance/levee-de-fonds/',
            r'https?://www\.maddyness\.com/page/'
        ),
        restrict_xpaths=('//div[@class="pagination-wrapper"]'),
    )
)
The s? here matches the s zero or one time, which makes the pattern work for both http and https.
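
To make the s? point concrete, here is a tiny standalone check (the article URL below is invented purely for illustration):

import re

OLD_PATTERN = r'http://www\.maddyness\.com/finance/levee-de-fonds/'
NEW_PATTERN = r'https?://www\.maddyness\.com/finance/levee-de-fonds/'

# Hypothetical article URL, as served after the 301 redirect to https
url = 'https://www.maddyness.com/finance/levee-de-fonds/2016/04/28/example-article/'

print(bool(re.search(OLD_PATTERN, url)))  # False: an http-only pattern cannot match an https URL
print(bool(re.search(NEW_PATTERN, url)))  # True: 's?' makes the trailing s optional, so both schemes match

Since the crawl log already shows the 301 from http to https, the start_urls entry could also be switched to https directly to spare the extra redirect.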

Thank you so much! Really, thank you very much!