
Python Scrapy NotSupported error

Python 不支持刮擦错误,python,scrapy,Python,Scrapy,当我从该页的详细信息页面抓取数据时,出现错误刮擦。异常。不支持:我仍然可以通过少量页面获取数据,但当我增加页面数量时,刮擦运行但没有更多输出,它运行且无法停止。提前感谢 页面有图像,但我不想抓取图像,可能有响应内容不是文本 这是一个错误 2017-02-18 15:35:35 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.google.com.my:443/maps/

When I scrape data from the detail pages of this site, I get scrapy.exceptions.NotSupported. With a small number of pages I can still get data, but when I increase the number of pages, the crawl keeps running with no further output and never stops. Thanks in advance.

The pages contain images, but I don't want to scrape the images; maybe some of the responses have content that isn't text.
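For reference (not part of the original question): selectors only work on TextResponse objects, so one way to keep binary responses such as PDFs or images from crashing the callback is a type check at the top of parse(). This is only a sketch; the selector is the one from the traceback further down.

from scrapy.http import TextResponse

def parse(self, response):
    # Response.css()/.xpath() raise NotSupported unless the response is a
    # TextResponse (HTML/XML/plain text); PDFs and images arrive as a plain
    # Response, so skip them here.
    if not isinstance(response, TextResponse):
        return
    company = response.css('font:nth-child(3)::text').extract_first()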

Here is the error:

2017-02-18 15:35:35 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.google.com.my:443/maps/place/bs+bio+science+sdn+bhd/@4.109495,109.101269,2856256m/data=!3m1!4b1!4m2!3m1!1s0x0:0xb11eb29219c723f4?source=s_q&hl=en> from <GET http://maps.google.com.my/maps?f=q&source=s_q&hl=en&q=bs+bio+science+sdn+bhd&vps=1&jsv=171b&sll=4.109495,109.101269&sspn=25.686885,46.318359&ie=UTF8&ei=jPeISu6RGI7kugOboeXiDg&cd=1&usq=bs+bio+science+sdn+bhd&geocode=FQdNLwAdEm4QBg&cid=12762834734582014964&li=lmd>
2017-02-18 15:35:37 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://com> (failed 3 times): DNS lookup failed: address 'com' not found: [Errno 11001] getaddrinfo failed.
2017-02-18 15:35:37 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.byunature> (failed 3 times): DNS lookup failed: address 'www.byunature' not found: [Errno 11001] getaddrinfo failed.
2017-02-18 15:35:37 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.borneococonutoil.com> (failed 3 times): DNS lookup failed: address 'www.borneococonutoil.com' not found: [Errno 11001] getaddrinfo failed.
2017-02-18 15:35:37 [scrapy.core.scraper] ERROR: Error downloading <GET http://com>: DNS lookup failed: address 'com' not found: [Errno 11001] getaddrinfo failed.
2017-02-18 15:35:37 [scrapy.core.scraper] ERROR: Error downloading <GET http://www.byunature>: DNS lookup failed: address 'www.byunature' not found: [Errno 11001] getaddrinfo failed.
2017-02-18 15:35:37 [scrapy.core.scraper] ERROR: Error downloading <GET http://www.borneococonutoil.com>: DNS lookup failed: address 'www.borneococonutoil.com' not found: [Errno 11001] getaddrinfo failed.
2017-02-18 15:35:37 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.google.com.my/maps/place/bs+bio+science+sdn+bhd/@4.109495,109.101269,2856256m/data=!3m1!4b1!4m2!3m1!1s0x0:0xb11eb29219c723f4?source=s_q&hl=en&dg=dbrw&newdg=1> from <GET https://www.google.com.my:443/maps/place/bs+bio+science+sdn+bhd/@4.109495,109.101269,2856256m/data=!3m1!4b1!4m2!3m1!1s0x0:0xb11eb29219c723f4?source=s_q&hl=en>
2017-02-18 15:35:38 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.google.com.my/maps/place/bs+bio+science+sdn+bhd/@4.109495,109.101269,2856256m/data=!3m1!4b1!4m2!3m1!1s0x0:0xb11eb29219c723f4?source=s_q&hl=en&dg=dbrw&newdg=1> (referer: http://www.bsbioscience.com/contactus.html)
2017-02-18 15:35:41 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.canaanalpha.com/extras/Anistrike_Poster.pdf> (referer: http://www.canaanalpha.com/anistrike.html)
2017-02-18 15:35:41 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.canaanalpha.com/extras/Anistrike_Poster.pdf> (referer: http://www.canaanalpha.com/anistrike.html)
Traceback (most recent call last):
  File "c:\python27\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
    yield next(it)
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in process_spider_output
    for x in result:
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "c:\python27\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "D:\Scrapy\tutorial\tutorial\spiders\tu2.py", line 17, in parse
    company = response.css('font:nth-child(3)::text').extract_first()
  File "c:\python27\lib\site-packages\scrapy\http\response\__init__.py", line 97, in css
    raise NotSupported("Response content isn't text")
NotSupported: Response content isn't text
2017-02-18 15:35:41 [scrapy.core.engine] INFO: Closing spider (finished)
2017-02-18 15:35:41 [scrapy.extensions.feedexport] INFO: Stored json feed (30 items) in: tu2.json
2017-02-18 15:35:41 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 55,
 'downloader/exception_type_count/scrapy.exceptions.NotSupported': 31,
 'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 24,
By default, LinkExtractor ignores many non-HTML file types, including PDF - see the LinkExtractor documentation.
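To illustrate the default behaviour (a small sketch, not part of the original answer): the extension list lives in scrapy.linkextractors.IGNORED_EXTENSIONS, and a bare LinkExtractor() uses it as deny_extensions, so .pdf links are dropped before they ever become requests.

from scrapy.linkextractors import IGNORED_EXTENSIONS, LinkExtractor

print('pdf' in IGNORED_EXTENSIONS)   # True - PDFs are ignored by default
le = LinkExtractor()                 # same as deny_extensions=IGNORED_EXTENSIONS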

For your code example, try the following:

# detail pages: restrict link extraction to the nested table cells
link_extractor = LinkExtractor(restrict_css='td td')
for link in link_extractor.extract_links(response):
    # extract_links() returns Link objects whose .url is already absolute,
    # so no response.urljoin() is needed
    yield scrapy.Request(link.url, callback=self.parse)
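Since LinkExtractor's default deny_extensions already covers pdf, detail links such as the Anistrike_Poster.pdf one in the log above should no longer be requested at all.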

@Granitosarus Thanks, but how do I create a filter for it - do I create the filter in init.py, following your link? Or do you mean we can simply skip those links and not process the PDF links at all?
@RoShanShan Yes, just don't process the PDF links. The second example, after the # or, is what you really need.
Look, I really don't know where to put the code after the # or. I want to extract data from the detail pages of this link: . You can see my code above.
@RoShanShan See my edit for how to improve the code to solve this.
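The example from the answer that the comments refer to, with the imports it needs spelled out: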
from scrapy import Request
from scrapy.linkextractors import LinkExtractor

def parse(self, response):
    url = 'someurl'
    # option 1: skip pdf links explicitly
    if '.pdf' not in url:
        yield Request(url, self.parse2)
    # or
    # option 2: let LinkExtractor filter them (pdf is ignored by default)
    le = LinkExtractor()
    for link in le.extract_links(response):
        yield Request(link.url, self.parse2)
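In other words, the branch after # or replaces the manual '.pdf' check rather than sitting next to it: inside the spider's parse() you keep only the LinkExtractor loop, which is essentially what the suggested detail-page code earlier in the answer already does.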