Scrapy: using a random proxy pool to avoid getting banned

Tags: http, https, proxy, scrapy, user-agent

I'm very new to Scrapy (my background is not informatics). I have a website that I can't visit with my local IP, since I am banned from it; I can access it using a VPN service in the browser. So that my spider could crawl it, I set up a pool of proxies that I found here. With that my spider is able to crawl and scrape items, but my doubt is whether I have to change the proxy pool list every day??? Sorry if my question is a dumb one...

Here is my settings.py:

BOT_NAME = 'reviews'

SPIDER_MODULES = ['reviews.spiders']
NEWSPIDER_MODULE = 'reviews.spiders'
DOWNLOAD_DELAY = 1
RANDOMIZE_DOWNLOAD_DELAY = True

DOWNLOADER_MIDDLEWARES = {
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': None,  # avoids "exceptions.IOError: Not a gzipped file" errors
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware' : None,
        'reviews.rotate_useragent.RotateUserAgentMiddleware' :400,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 110, 
        'reviews.middlewares.ProxyMiddleware': 100,
    }

PROXIES = [{'ip_port': '168.63.249.35:80', 'user_pass': ''},
           {'ip_port': '162.17.98.242:8888', 'user_pass': ''},
           {'ip_port': '70.168.108.216:80', 'user_pass': ''},
           {'ip_port': '45.64.136.154:8080', 'user_pass': ''},
           {'ip_port': '149.5.36.153:8080', 'user_pass': ''},
           {'ip_port': '185.12.7.74:8080', 'user_pass': ''},
           {'ip_port': '150.129.130.180:8080', 'user_pass': ''},
           {'ip_port': '185.22.9.145:8080', 'user_pass': ''},
           {'ip_port': '200.20.168.135:80', 'user_pass': ''},
           {'ip_port': '177.55.64.38:8080', 'user_pass': ''},]

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'reviews (+http://www.yourdomain.com)'
Here is my middlewares.py:

import base64
import random

from settings import PROXIES

class ProxyMiddleware(object):
    def process_request(self, request, spider):
        # Pick a random proxy from the pool for each request.
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        # Only send credentials when user_pass is non-empty; an empty
        # string is falsy, so unauthenticated proxies skip this branch.
        if proxy['user_pass']:
            # b64encode adds no trailing newline, unlike encodestring.
            encoded_user_pass = base64.b64encode(proxy['user_pass'])
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
Another question: if I have a website with HTTPS, should I build a separate proxy pool list just for HTTPS, and then another class, HttpsProxyMiddleware(object), that receives an HTTPS_PROXIES list?

And my rotate_useragent.py:

import random
from scrapy.contrib.downloadermiddleware.useragent import UserAgentMiddleware

class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            request.headers.setdefault('User-Agent', ua)

    # The default user_agent_list composes Chrome, IE, Firefox, Mozilla,
    # Opera and Netscape; for more user agent strings, see
    # http://www.useragentstring.com/pages/useragentstring.php
    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    ]
There is one more question about settings.py, and it's the last one (sorry again if it is a dumb one): there is a commented default section there,

# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'reviews (+http://www.yourdomain.com)'

Should I uncomment it and put in my personal information? Or leave it just like that? I want to crawl effectively, but with good policies and good habits to avoid possible ban problems...

I am asking all this because my spider started to throw errors like these:

twisted.internet.error.TimeoutError: User timeout caused connection failure: Getting http://www.example.com/browse/?start=884 took longer than 180.0 seconds.

and

Error downloading <GET http://www.example.com/article/2883892/x-review.html>: [<twisted.python.failure.Failure <class 'twisted.internet.error.ConnectionLost'>>]

and

Error downloading <GET http://www.example.com/browse/?start=6747>: TCP connection timed out: 110: Connection timed out.

Thanks so much for your help and time.

  • There is no right answer to this one. Some proxies are not always
    available, so you have to check them from time to time (a liveness-check
    sketch follows this list). Besides, using the same proxy every time you
    hit a server may get its IP banned too, but that depends on the security
    mechanisms of that server.
  • Yes, because you don't know whether every proxy in the pool supports
    HTTPS. Alternatively, you can keep a single pool and add a field to each
    proxy indicating its HTTPS support (a sketch of that also follows this
    list).
  • In your settings you are disabling the user agent middleware:
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None
    so the USER_AGENT setting will have no effect (see the settings sketch
    after this list).

  • There is already a library that does this. Download it from there; it
    has not been published on pypi.org yet, so you cannot install it easily
    with pip or easy_install.
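To check the pool from the first bullet, a small standalone script is enough. The sketch below is only a starting point, not part of the original code: it assumes the requests library is installed, reuses the PROXIES list from settings.py, and uses http://httpbin.org/ip as an arbitrary test URL (swap in any stable page).

# check_proxies.py - hypothetical helper to prune dead proxies before a run
import requests

from settings import PROXIES

TEST_URL = 'http://httpbin.org/ip'  # placeholder test page

def alive_proxies(proxies, timeout=10):
    # Return only the proxies that answered within `timeout` seconds.
    alive = []
    for proxy in proxies:
        address = 'http://%s' % proxy['ip_port']
        try:
            response = requests.get(TEST_URL, proxies={'http': address},
                                    timeout=timeout)
            if response.status_code == 200:
                alive.append(proxy)
        except requests.RequestException:
            pass  # unreachable or too slow: leave it out of the pool
    return alive

if __name__ == '__main__':
    for proxy in alive_proxies(PROXIES):
        print(proxy['ip_port'])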
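For the single-pool idea from the second bullet, here is a hypothetical sketch; the 'https' field and the SchemeAwareProxyMiddleware name are made up for illustration, not an established API.

import random

# Each entry declares whether the proxy is known to handle HTTPS.
PROXIES = [
    {'ip_port': '168.63.249.35:80',  'user_pass': '', 'https': False},
    {'ip_port': '185.22.9.145:8080', 'user_pass': '', 'https': True},
]

class SchemeAwareProxyMiddleware(object):
    def process_request(self, request, spider):
        needs_https = request.url.startswith('https://')
        # For an HTTPS request, only proxies flagged as HTTPS-capable
        # qualify; plain HTTP requests can use any proxy in the pool.
        candidates = [p for p in PROXIES if p['https'] or not needs_https]
        if not candidates:
            return  # no usable proxy; let the request go out directly
        proxy = random.choice(candidates)
        request.meta['proxy'] = 'http://%s' % proxy['ip_port']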
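And for the third bullet: user agent rotation and the USER_AGENT setting are alternatives, not complements. A minimal settings.py sketch of the two options, based only on the snippets above:

# Option (a): rotate user agents. USER_AGENT is ignored here, because
# the stock middleware that reads it is disabled.
DOWNLOADER_MIDDLEWARES = {
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    'reviews.rotate_useragent.RotateUserAgentMiddleware': 400,
}

# Option (b): one honest user agent. Remove the overrides above and uncomment:
#USER_AGENT = 'reviews (+http://www.yourdomain.com)'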
