Python: How to use Privoxy and Tor to complete a tricky scraping project


I'm trying to scrape information from www.apkmirror.com, but at the moment I can't even access the site in a browser, because it says the owner has banned my IP address (see below).

I'm trying to get around this by using Privoxy and Tor, similar to a setup described elsewhere.

First, I installed and started Privoxy, which by default listens on port 8118. I added the following line to
/etc/privoxy/config:

forward-socks5   /               127.0.0.1:9050 .
(The trailing dot means there is no further HTTP forwarder after the SOCKS proxy.) I'm also running Tor, which is listening on port 9050, as confirmed with netstat:

kurt@kurt-ThinkPad:~$ netstat -tulnp | grep 9050
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:9050          0.0.0.0:*               LISTEN      - 
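Whether both halves of the chain are actually up can also be checked from Python; a minimal sketch using only the standard library (ports 8118 and 9050 as configured above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Privoxy should be listening on 8118 and Tor's SOCKS port on 9050:
# print(port_open("127.0.0.1", 8118), port_open("127.0.0.1", 9050))
```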
As far as I can tell, this works with
wget
. For example, if I wget www.apkmirror.com through the proxy, I get a response:

kurt@kurt-ThinkPad:~$ wget www.apkmirror.com -e use_proxy=yes -e http_proxy=127.0.0.1:8118
--2017-04-24 11:02:32--  http://www.apkmirror.com/
Connecting to 127.0.0.1:8118... connected.
Proxy request sent, awaiting response... 200 OK
Length: 185097 (181K) [text/html]
Saving to: ‘index.html.2’

index.html.2        100%[===================>] 180,76K  --.-KB/s    in 0,004s  

2017-04-24 11:02:44 (42,7 MB/s) - ‘index.html.2’ saved [185097/185097]
However, without the proxy I get
ERROR 403: Forbidden

kurt@kurt-ThinkPad:~$ wget www.apkmirror.com
--2017-04-24 11:01:24--  http://www.apkmirror.com/
Resolving www.apkmirror.com (www.apkmirror.com)... 104.19.134.58, 104.19.136.58, 104.19.133.58, ...
Connecting to www.apkmirror.com (www.apkmirror.com)|104.19.134.58|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-04-24 11:01:24 ERROR 403: Forbidden.
Now for the Python code. I wrote the following (simplified) spider:

I also added the following lines to
settings.py

import os
os.environ['http_proxy'] = "http://localhost:8118"

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 1,
}
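As I understand it, the proxy set this way is discovered through the standard environment variables (the stdlib's `urllib.request.getproxies()` is what consults them), which can be checked directly; a small sketch:

```python
import os
import urllib.request

# Set http_proxy before the check, as settings.py does:
os.environ["http_proxy"] = "http://localhost:8118"

# getproxies() is what stdlib-based clients (and, as I understand it,
# Scrapy's HttpProxyMiddleware) use to discover per-scheme proxies.
proxies = urllib.request.getproxies()
print(proxies.get("http"))  # http://localhost:8118
```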
From what I understand, if I set the
http_proxy
environment variable, then
HttpProxyMiddleware
should kick in. However, when I try to scrape with the command

scrapy crawl tor-spider -o test.json
I get the following response:

2017-04-24 10:59:17 [scrapy.utils.log] INFO: Scrapy 1.3.3 started (bot: proxy_spider)
2017-04-24 10:59:17 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'proxy_spider.spiders', 'FEED_URI': 'test.json', 'SPIDER_MODULES': ['proxy_spider.spiders'], 'BOT_NAME': 'proxy_spider', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}

2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.closespider.CloseSpider',
 'scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-04-24 10:59:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-04-24 10:59:18 [scrapy.core.engine] INFO: Spider opened
2017-04-24 10:59:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-04-24 10:59:18 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-04-24 10:59:18 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.apkmirror.com/robots.txt> (referer: None)
2017-04-24 10:59:18 [scrapy.core.engine] DEBUG: Crawled (403) <GET https://www.apkmirror.com/sitemap_index.xml> (referer: None)
2017-04-24 10:59:18 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <403 https://www.apkmirror.com/sitemap_index.xml>: HTTP status code is not handled or not allowed
2017-04-24 10:59:18 [scrapy.core.engine] INFO: Closing spider (finished)
2017-04-24 10:59:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 519,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 3110,
 'downloader/response_count': 2,
 'downloader/response_status_count/403': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 4, 24, 8, 59, 18, 927878),
 'log_count/DEBUG': 3,
 'log_count/INFO': 8,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 4, 24, 8, 59, 18, 489419)}
2017-04-24 10:59:18 [scrapy.core.engine] INFO: Spider closed (finished)

In short, despite my attempt to scrape anonymously via Privoxy/Tor, I'm still getting
403
errors in the scraper. Am I doing something wrong?

APKMirror is using Cloudflare to protect itself (among other things) from scraping and bots.

Most likely they have blacklisted Scrapy's default user agent. So besides using a Tor IP (which, incidentally, is also easy to blacklist), you should set a User-Agent header that looks like a real browser:

In settings.py:

USER_AGENT = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0"
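To check the user-agent theory independently of Scrapy, one can build a single request carrying a browser-like header; a small sketch using only the standard library (the header value is the same one suggested above):

```python
import urllib.request

UA = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0"

# Build the request with the browser-like User-Agent attached;
# urllib.request.urlopen(req) would then send it (through the proxy,
# if http_proxy is set in the environment).
req = urllib.request.Request("https://www.apkmirror.com/", headers={"User-Agent": UA})
print(req.get_header("User-agent"))
```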
(See the documentation for details.)