
Python Scrapy does not scrape the parent URL

Tags: python, python-3.x, scrapy

I have an HTML file, demo1.html, with the following code:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>

“What is this obsession people have with books? They put them in their houses—like they’re trophies.
What do you need it for after you read it?” – Jerry

<a href="file:///path/to/demo2.html"></a>
</body>
</html>
I have written a spider that scrapes the plain text from HTML files and stores it in a text file, basename.txt, corresponding to each URL.

My spider code is shown in full further down, after the answer and the comments.

When I run the spider, I can see that demo2.html is scraped and the text:

“Tuesday has no feel. Monday has a feel, Friday has a feel, Sunday has a feel…” – Newman

is stored in demo2.html.txt, but my spider does not return any response for demo1.html, which is one of the URLs in the start_urls list.

I expect a file demo1.html.txt to be created containing the following text:

“What is this obsession people have with books? They put them in their houses—like they’re trophies. What do you need it for after you read it?” – Jerry

Note: I have set DEPTH_LIMIT=1 in settings.py.
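
For reference, the relevant part of settings.py probably looks something like the sketch below; DEPTH_LIMIT is stated in the question, and the remaining values simply mirror the "Overridden settings" block in the Scrapy log at the end of this post.

# settings.py (sketch; values taken from the question and the "Overridden settings" log entry)
BOT_NAME = 'scrapy_project'
SPIDER_MODULES = ['scrapy_project.spiders']
NEWSPIDER_MODULE = 'scrapy_project.spiders'

DEPTH_LIMIT = 1                  # only follow links one level below the start URLs
CONCURRENT_REQUESTS = 30
COOKIES_ENABLED = False
AUTOTHROTTLE_ENABLED = True
AJAXCRAWL_ENABLED = True
DOWNLOAD_MAXSIZE = 5242880       # 5 MB
REACTOR_THREADPOOL_MAXSIZE = 20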

The full Scrapy log is included at the end of this post.

Any help would be appreciated.

I fixed this by overriding parse_start_url so that it calls the same parse_file callback.

Reference answer:

Complete code with the expected changes:

from os.path import basename
from urllib.parse import urlparse

# Import the lxml submodules explicitly; a bare "import lxml" does not make
# lxml.html / lxml.etree reliably available.
import lxml.etree
import lxml.html

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

FOLLOW = True

class CustomLinkExtractor(LinkExtractor):
    def __init__(self, *args, **kwargs):
        super(CustomLinkExtractor, self).__init__(*args, **kwargs)
        self.deny_extensions = [".zip", ".mp4", ".mp3"]  # ignore files with mentioned extensions

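# Strip scripts, styles, comments and the <head> element, then return the remaining visible text as UTF-8 bytes.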
def get_plain_html(response_body):
    root = lxml.html.fromstring(response_body)
    lxml.etree.strip_elements(root, lxml.etree.Comment, "script", "head", "style")
    text = lxml.html.tostring(root, method="text", encoding='utf-8')
    return text

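# Derive the output file name from the last path segment of the URL, falling back to the host name.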
def get_file_name(url):
    parsed_url = urlparse(url)
    file_name = basename(parsed_url.path.strip('/')) if parsed_url.path.strip('/') else parsed_url.netloc
    return file_name



class WebScraper(CrawlSpider):
    name = "goblin"
    start_urls = [
        'file:///path/to/demo1.html'
    ]

    def __init__(self, *args, **kwargs):
        self.rules = (Rule(CustomLinkExtractor(), follow=FOLLOW, callback="parse_file"),)
        super(WebScraper, self).__init__(*args, **kwargs)

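    # CrawlSpider applies the Rule callbacks only to links extracted from responses,
    # not to the start_urls responses themselves; those go to parse_start_url, which
    # does nothing by default. Overriding it ensures demo1.html is also written out.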
    def parse_start_url(self, response):
        return self.parse_file(response)

    def parse_file(self, response):
        try:
            file_name = get_file_name(response.url)
            if hasattr(response, "text"):
                file_name = '{0}.txt'.format(file_name)
                text = get_plain_html(response.body)
                file_path = './{0}'.format(file_name)
                with open(file_path, 'wb') as f_data:
                    f_data.write(text)
        except Exception as ex:
            self.logger.error(ex, exc_info=True)

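In the log below the spider is run from the scrapy_project Scrapy project (i.e. with scrapy crawl goblin). As a minimal alternative, assuming the code above is saved as a single script with this block appended after the class definition, it can also be run in-process with CrawlerProcess; this is a sketch, not part of the original question:

from scrapy.crawler import CrawlerProcess

if __name__ == "__main__":
    # Mirror the DEPTH_LIMIT setting from the question; other settings use Scrapy defaults.
    process = CrawlerProcess(settings={"DEPTH_LIMIT": 1})
    process.crawl(WebScraper)
    process.start()  # blocks until the crawl finishes
    # Afterwards demo1.html.txt and demo2.html.txt should exist in the working directory.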
Comments:

The logs show that demo1 was crawled, but you say it was not. They also show that no items were scraped from either of the two URLs. That seems to contradict your question.

Hi Gallaecio, all I do is read response.body inside the parse_file method. When I debug the code, I can see that parse_file is called for demo2.html, but not for demo1.html.

Was that log taken from the debugging session, or from a separate run?

Hi @Gallaecio, these logs are from a separate run. It looks like I need to override parse_start_url, reference: . I am still testing, so I cannot say for sure. I had to override parse_start_url, reference:

My spider code:

from os.path import basename
from urllib.parse import urlparse

import lxml.etree
import lxml.html

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

FOLLOW = True

class CustomLinkExtractor(LinkExtractor):
    def __init__(self, *args, **kwargs):
        super(CustomLinkExtractor, self).__init__(*args, **kwargs)
        self.deny_extensions = [".zip", ".mp4", ".mp3"]  # ignore files with mentioned extensions

def get_plain_html(response_body):
    root = lxml.html.fromstring(response_body)
    lxml.etree.strip_elements(root, lxml.etree.Comment, "script", "head", "style")
    text = lxml.html.tostring(root, method="text", encoding='utf-8')
    return text

def get_file_name(url):
    parsed_url = urlparse(url)
    file_name = basename(parsed_url.path.strip('/')) if parsed_url.path.strip('/') else parsed_url.netloc
    return file_name



class WebScraper(CrawlSpider):
    name = "goblin"
    start_urls = [
        'file:///path/to/demo1.html'
    ]

    def __init__(self, *args, **kwargs):
        self.rules = (Rule(CustomLinkExtractor(), follow=FOLLOW, callback="parse_file"),)
        super(WebScraper, self).__init__(*args, **kwargs)

    def parse_file(self, response):
        try:
            file_name = get_file_name(response.url)
            if hasattr(response, "text"):
                file_name = '{0}.txt'.format(file_name)
                text = get_plain_html(response.body)
                file_path = './{0}'.format(file_name)
                with open(file_path, 'wb') as f_data:
                    f_data.write(text)
        except Exception as ex:
            self.logger.error(ex, exc_info=True)

Scrapy log:

2020-06-17 20:33:27 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: scrapy_project)
2020-06-17 20:33:27 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.21.0, Twisted 20.3.0, Python 3.7.5 (default, Nov  7 2019, 10:50:52) - [GCC 8.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 2.9.2, Platform Linux-...-Ubuntu-...
2020-06-17 20:33:27 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.epollreactor.EPollReactor
2020-06-17 20:33:27 [scrapy.crawler] INFO: Overridden settings:
{'AJAXCRAWL_ENABLED': True,
 'AUTOTHROTTLE_ENABLED': True,
 'BOT_NAME': 'scrapy_project',
 'CONCURRENT_REQUESTS': 30,
 'COOKIES_ENABLED': False,
 'DEPTH_LIMIT': 1,
 'DOWNLOAD_MAXSIZE': 5242880,
 'NEWSPIDER_MODULE': 'scrapy_project.spiders',
 'REACTOR_THREADPOOL_MAXSIZE': 20,
 'SPIDER_MODULES': ['scrapy_project.spiders']}
2020-06-17 20:33:27 [scrapy.extensions.telnet] INFO: Telnet Password: *******
2020-06-17 20:33:27 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.throttle.AutoThrottle']
2020-06-17 20:33:27 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy_project.middlewares.FilterResponses',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-06-17 20:33:27 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-06-17 20:33:27 [scrapy.core.engine] INFO: Spider opened
2020-06-17 20:33:27 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-06-17 20:33:27 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-06-17 20:33:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET file:///path/to/demo1.html> (referer: None)
2020-06-17 20:33:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET file:///path/to/demo2.html> (referer: None)
2020-06-17 20:33:33 [scrapy.core.engine] INFO: Closing spider (finished)
2020-06-17 20:33:33 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 556,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 646,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'elapsed_time_seconds': 6.091841,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 6, 17, 15, 3, 33, 427522),
 'log_count/DEBUG': 2,
 'log_count/INFO': 14,
 'memusage/max': 1757986816,
 'memusage/startup': 1757986816,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2020, 6, 17, 15, 3, 27, 335681)}
2020-06-17 20:33:33 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0