
Python: can't launch the Scrapy shell properly

Tags: python, python-3.x, web-scraping, scrapy

I'm trying to build a Scrapy spider from scratch. I managed to generate a spider with scrapy genspider name, but when I type scrapy shell I get the output below.

Note that when I run scrapy crawl spider_name with my other spider, the crawl works fine. Even so, I can't launch the Scrapy shell.
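
For reference, the commands involved look roughly like this (the domain passed to genspider is a placeholder, not something from the question):

scrapy genspider spider_name example.com   # generates the spider
scrapy crawl spider_name                   # the crawl works fine
scrapy shell                               # this is what produces the output below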

(venv) jacquelinewong@Jacquelines-MBP rent_apt % scrapy shell
2020-05-29 09:29:12 [scrapy.utils.log] INFO: Scrapy 2.0.1 started (bot: rent_apt)
2020-05-29 09:29:12 [scrapy.utils.log] INFO: Versions: lxml 4.2.1.0, libxml2 2.9.8, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 20.3.0, Python 3.6.5 |Anaconda, Inc.| (default, Apr 26 2018, 08:42:37) - [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)], pyOpenSSL 18.0.0 (OpenSSL 1.1.1g  21 Apr 2020), cryptography 2.9.2, Platform Darwin-19.3.0-x86_64-i386-64bit
2020-05-29 09:29:12 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2020-05-29 09:29:12 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'rent_apt',
 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter',
 'LOGSTATS_INTERVAL': 0,
 'NEWSPIDER_MODULE': 'rent_apt.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['rent_apt.spiders']}
2020-05-29 09:29:12 [scrapy.extensions.telnet] INFO: Telnet Password: eb3c5554d18c822b
2020-05-29 09:29:12 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage']
2020-05-29 09:29:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-05-29 09:29:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-05-29 09:29:12 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-05-29 09:29:12 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-05-29 09:29:13 [py.warnings] WARNING: /Users/jacquelinewong/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py:763: UserWarning: Attempting to work in a virtualenv. If you encounter problems, please install IPython inside the virtualenv.
  warn("Attempting to work in a virtualenv. If you encounter problems, please "

[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x114358240>
[s]   item       {}
[s]   settings   <scrapy.settings.Settings object at 0x114354e80>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects 
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
In [1]: response

In [2]: 

Please help me debug this.

It turned out I was running into a timeout issue. A request can time out for several reasons, including the following (see the sketch after this list for a way to test some of them):

  • The server is rate-limiting your IP address
  • The server only responds to IP addresses from specific regions
  • The server is too busy, or has been under very heavy load for a long time
  • The server only responds to specific user agents
  • The server only responds when a cookie is present in the request headers
  • …and any number of other reasons
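
A minimal sketch of how one might test the user-agent and timeout hypotheses from inside the same shell; the browser-like User-Agent string and the 30-second timeout are illustrative choices of mine, not values from the question:

import scrapy

# Build a request with a browser-like User-Agent and a short timeout,
# so a blocked request fails fast instead of hanging for 180 seconds.
req = scrapy.Request(
    'https://www.apartments.com/manhattan-ny/',
    headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4)'},
    meta={'download_timeout': 30},
)
fetch(req)  # the shell's fetch() accepts a Request and updates `response`

Note also in the settings dump above that ROBOTSTXT_OBEY is True, so the shell fetches https://www.apartments.com/robots.txt first; the log shows it is that robots.txt request that is timing out.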
The shell does work as soon as I try fetching a different URL; it is only this site that times out:

In [1]: fetch('https://www.apartments.com/manhattan-ny/')
2020-05-29 09:53:28 [scrapy.core.engine] INFO: Spider opened
2020-05-29 09:56:28 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.apartments.com/robots.txt> (failed 1 times): User timeout caused connection failure: Getting https://www.apartments.com/robots.txt took longer than 180.0 seconds..
2020-05-29 09:59:28 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.apartments.com/robots.txt> (failed 2 times): User timeout caused connection failure: Getting https://www.apartments.com/robots.txt took longer than 180.0 seconds..
2020-05-29 10:02:28 [scrapy.downloadermiddlewares.retry] ERROR: Gave up retrying <GET https://www.apartments.com/robots.txt> (failed 3 times): User timeout caused connection failure: Getting https://www.apartments.com/robots.txt took longer than 180.0 seconds..
2020-05-29 10:02:28 [scrapy.downloadermiddlewares.robotstxt] ERROR: Error downloading <GET https://www.apartments.com/robots.txt>: User timeout caused connection failure: Getting https://www.apartments.com/robots.txt took longer than 180.0 seconds..
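
For contrast, a quick sanity check from the same shell; the URL below is just the Scrapy tutorial site, picked for illustration:

fetch('http://quotes.toscrape.com')  # a different site, fetched from the same shell
response.status                      # 200 here confirms the shell itself is fine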

The scrapy shell command expects a URL; please check the documentation.
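
In other words, pass the target URL directly on the command line (using the URL from the question):

scrapy shell 'https://www.apartments.com/manhattan-ny/'

When launched without a URL, the shell still starts, but there is no response object until you call fetch(); that matches the empty In [1]: response at the end of the question's log.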