Web scraping: scrapy shell returns a 204 response

Tags: web-scraping, scrapy

I'm trying to parse a specific website: www.bina.az/items/all. I want to test my approach before building a fully fledged spider, so I typed scrapy shell bina.az/items/all in the terminal and got back an empty 204 response instead of the page.


The cause is Cloudflare protection. I know how to bypass Cloudflare inside a Scrapy project, but I need the bypass to work in scrapy shell as well. How can I fix this?

You can run scrapy shell from inside the project.

Assuming your project looks like this:

cloudfare-spider
    env
    scrapy.cfg
    cloudfare
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        __pycache__
        settings.py
        spiders
            __init__.py
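Running scrapy shell from inside this directory works because scrapy.cfg points Scrapy at the project's settings module, so the shell loads the same settings, and therefore the same middlewares, as your spiders do.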
First, change into your project directory:

cd cloudfare-spider
If you don't have a virtual environment yet, create one:

virtualenv env
Then activate it:

source env/bin/activate
Then, inside the virtual environment, install the dependencies:

pip install scrapy scrapy_cloudflare_middleware 
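Before opening the shell you can check that Scrapy resolves your project settings; a quick sketch using Scrapy's get_project_settings helper (run it from the project directory, where scrapy.cfg lives):

from scrapy.utils.project import get_project_settings

# Only resolves the project settings when run from inside the project directory
settings = get_project_settings()
print(settings.get('BOT_NAME'))                    # expected: 'cloudfare'
print(settings.getdict('DOWNLOADER_MIDDLEWARES'))  # lists CloudFlareMiddleware once configured (see settings.py below)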
Then try running scrapy shell:

$ scrapy shell "https://bina.az/items/all"
2018-12-02 12:49:24 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: cloudfare)
2018-12-02 12:49:25 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.1 (default, Oct 22 2018, 10:41:28) - [GCC 8.2.1 20180831], pyOpenSSL 18.0.0 (OpenSSL 1.1.0j  20 Nov 2018), cryptography 2.4.2, Platform Linux-4.19.4-arch1-1-ARCH-x86_64-with-arch
2018-12-02 12:49:25 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'cloudfare', 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'EDITOR': 'vim', 'LOGSTATS_INTERVAL': 0, 'NEWSPIDER_MODULE': 'cloudfare.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['cloudfare.spiders'], 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36'}
2018-12-02 12:49:25 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage']
2018-12-02 12:49:25 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy_cloudflare_middleware.middlewares.CloudFlareMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-12-02 12:49:25 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-12-02 12:49:25 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-12-02 12:49:25 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-12-02 12:49:25 [scrapy.core.engine] INFO: Spider opened
2018-12-02 12:49:26 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://bina.az/robots.txt> (referer: None)
2018-12-02 12:49:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://bina.az/items/all> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x7f31a4b652b0>
[s]   item       {}
[s]   request    <GET https://bina.az/items/all>
[s]   response   <200 https://bina.az/items/all>
[s]   settings   <scrapy.settings.Settings object at 0x7f31a4b65630>
[s]   spider     <DefaultSpider 'default' at 0x7f31a463bef0>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects 
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
In [1]: 
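Once the shell is open you can confirm the bypass worked and start prototyping selectors. The CSS selector below is purely illustrative, since I haven't inspected bina.az's markup:

# Inside the scrapy shell:
response.status                     # 200 rather than a Cloudflare challenge page
response.css('title::text').get()   # quick sanity check on the returned HTML
fetch('https://bina.az/items/all')  # re-fetch; redirects are followed by default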
As you can see in the log above, the enabled downloader middlewares reported by [scrapy.middleware] include scrapy_cloudflare_middleware.middlewares.CloudFlareMiddleware.
I also noticed that you need to set USER_AGENT for it to work. This is my settings.py file:

BOT_NAME = 'cloudfare'

SPIDER_MODULES = ['cloudfare.spiders']
NEWSPIDER_MODULE = 'cloudfare.spiders'

ROBOTSTXT_OBEY = True

DOWNLOADER_MIDDLEWARES = {
    # The priority of 560 is important, because we want this middleware to kick in just before the scrapy built-in `RetryMiddleware`.
    'scrapy_cloudflare_middleware.middlewares.CloudFlareMiddleware': 560
}

USER_AGENT="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
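
A note on that priority: Scrapy calls process_response in decreasing order of the middleware's number, and the built-in RetryMiddleware sits at 550 by default, so 560 lets the Cloudflare middleware inspect the response before any retry happens. You can verify the default order yourself:

from pprint import pprint
from scrapy.settings.default_settings import DOWNLOADER_MIDDLEWARES_BASE

# RetryMiddleware appears here with its default priority of 550
pprint(DOWNLOADER_MIDDLEWARES_BASE)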

Comments:

"Have you tried setting a user agent? Have you actually run a crawl (scrapy crawl spider_name)? Are you using this middleware?"

"Yes, I tried a user agent as well. Thanks to the Cloudflare middleware I solved this inside the project, but I also want to use scrapy shell, and that's where the middleware wasn't being applied."

"Thank you very much, my friend! I didn't know the shell could pick up the project settings. You helped me, thanks again!"
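For completeness, the commenter's suggestion of a real crawl goes through the same project settings, so the middleware applies there too. A minimal hypothetical spider (the name and callback are placeholders, not taken from the original thread):

import scrapy


class BinaSpider(scrapy.Spider):
    # Hypothetical spider; run from the project with: scrapy crawl bina
    name = 'bina'
    start_urls = ['https://bina.az/items/all']

    def parse(self, response):
        # With CloudFlareMiddleware enabled in settings.py this should be a 200
        self.logger.info('Fetched %s (status %d)', response.url, response.status)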