Python: getting a "Too many requests" error when scraping a particular website with Scrapy

Tags: python, web-scraping, scrapy, python-requests

I have written a spider to get the event details from allevents.in. Every time I try to scrape it, I get the response

Too many requests, please try after some time or report this problem at contact@allevents.in
I have also tried the shell command

 scrapy shell 'http://allevents.in/new%20delhi/all'
but I still get the same message in response.body. I have tried other, similar websites and they work fine. Moreover, the above URL can be fetched without problems using requests as well as urllib.urlopen().
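
For reference, the plain-library check described above looks roughly like this (a sketch assuming Python 3, where urllib.urlopen() becomes urllib.request.urlopen()):

import requests
import urllib.request

url = 'http://allevents.in/new%20delhi/all'

# Plain requests call -- per the question, this returns the page without the error
r = requests.get(url)
print(r.status_code, len(r.text))

# The urllib equivalent (urllib.request.urlopen in Python 3, urllib.urlopen in Python 2)
with urllib.request.urlopen(url) as resp:
    print(resp.status, len(resp.read()))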

Here is my settings.py file:

# -*- coding: utf-8 -*-

# Scrapy settings for tutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tutorial'

SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 1

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 5
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tutorial.middlewares.TutorialSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
# #    'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,
#      'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': None,
#      # 'tutorial.middlewares.ProxyMiddleware': 100,
# }

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'tutorial.pipelines.TutorialPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

I am a beginner with Scrapy. Please help.

Scrapy uses multiple concurrent requests (8 by default) to scrape the websites you specify. It seems that allevents.in does not like being hit that hard.

Most likely, the solution is to set one of the following configuration options:

  • CONCURRENT_REQUESTS_PER_DOMAIN (defaults to 8; try a lower number)
  • CONCURRENT_REQUESTS_PER_IP (defaults to 0; if set to a positive number, it overrides the previous one)


Alternatively, you can also use the global CONCURRENT_REQUESTS = 1 setting (a combined sketch follows below).
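
Putting those suggestions together, a minimal settings.py sketch might look like this (the values are illustrative starting points, not prescriptions):

# settings.py -- throttle how hard the spider hits a single site
CONCURRENT_REQUESTS = 1              # global cap on in-flight requests
CONCURRENT_REQUESTS_PER_DOMAIN = 1   # at most one request per domain at a time
CONCURRENT_REQUESTS_PER_IP = 1       # if > 0, used instead of the per-domain cap
DOWNLOAD_DELAY = 5                   # seconds to wait between requests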

Hi, try that assignment and, if you see it works, increase it gradually. If you still receive the same warning, try setting a higher DOWNLOAD_DELAY. Use random proxies with Scrapy instead of applying AutoThrottle; there is no fun in limiting the crawler when you can hit the target at a much higher speed. Trust me, if you use a few hundred proxies {more is always better}, they will never know where you are. A rough sketch of such a proxy middleware is below.
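
The answer above does not name a specific package, so purely as an illustration, a hand-rolled downloader middleware that assigns a random proxy to each request could look like this (the PROXIES list, addresses, and class name are hypothetical; in practice you would likely use a maintained rotating-proxy package):

# middlewares.py -- hypothetical example of picking a random proxy per request
import random

PROXIES = [
    'http://10.0.0.1:8080',   # placeholder addresses; replace with real proxies
    'http://10.0.0.2:8080',
]

class RandomProxyMiddleware:
    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware honours request.meta['proxy']
        request.meta['proxy'] = random.choice(PROXIES)

# settings.py -- enable it (the number controls middleware ordering)
# DOWNLOADER_MIDDLEWARES = {
#     'tutorial.middlewares.RandomProxyMiddleware': 100,
# }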

Basically, allevents must be tracking your scraping program, and as a precaution they may have blocked the IP address of your source system. There is nothing wrong at your end; it is the allevents service that has disabled your crawler. You could change your IP and check, if that is possible.

I have tried using different IPs, but I still get the same output. @Mahesh Karia

There are more things they consider, such as the client type, the requests, etc., so at this point we really don't know all of their criteria for blocking.

How do I set that? I have tried it with scrapy shell. @yorah

You can pass command-line options to scrapy shell with the following syntax: scrapy shell -s CONCURRENT_REQUESTS_PER_DOMAIN='8' 'http://...' Also, if you start scrapy shell from within the project path, your project settings should be picked up automatically.

I have tried CONCURRENT_REQUESTS_PER_DOMAIN = 1 and still get the same error.

Basically, the website is telling you: "don't hit me so hard". By setting CONCURRENT_REQUESTS_PER_DOMAIN to 1, you limit yourself to one connection. You can also try setting DOWNLOAD_DELAY = 5 to put a 5-second delay between those requests (feel free to increase/decrease this value to find the sweet spot); the command-line form is shown below.
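
For completeness, the command-line form of that suggestion (the URL is the one from the question; the values are only a starting point) would be:

 scrapy shell -s CONCURRENT_REQUESTS_PER_DOMAIN=1 -s DOWNLOAD_DELAY=5 'http://allevents.in/new%20delhi/all'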