Python Scrapy + Splash: Connection refused


I'm learning how to use Scrapy + Splash. I've created a project with a virtual environment inside it, and now I'm following this tutorial:

I ran Splash:

$ docker run -p 8050:8050 scrapinghub/splash

This resulted in:

2017-01-12 09:18:50+0000 [-] Log opened.
2017-01-12 09:18:50.225754 [-] Splash version: 2.3
2017-01-12 09:18:50.227033 [-] Qt 5.5.1, PyQt 5.5.1, WebKit 538.1, sip 4.17, Twisted 16.1.1, Lua 5.2
2017-01-12 09:18:50.227201 [-] Python 3.4.3 (default, Nov 17 2016, 01:08:31) [GCC 4.8.4]
2017-01-12 09:18:50.227645 [-] Open files limit: 1048576
2017-01-12 09:18:50.227882 [-] Can't bump open files limit
2017-01-12 09:18:50.333978 [-] Xvfb is started: ['Xvfb', ':1', '-screen', '0', '1024x768x24']
2017-01-12 09:18:50.438528 [-] proxy profiles support is enabled, proxy profiles path: /etc/splash/proxy-profiles
2017-01-12 09:18:50.597573 [-] verbosity=1
2017-01-12 09:18:50.597747 [-] slots=50
2017-01-12 09:18:50.597820 [-] argument_cache_max_entries=500
2017-01-12 09:18:50.598696 [-] Web UI: enabled, Lua: enabled (sandbox: enabled)
2017-01-12 09:18:50.601924 [-] Site starting on 8050
2017-01-12 09:18:50.602119 [-] Starting factory <twisted.web.server.Site object at 0x7ff528490be0>

The Splash console also shows a D-Bus warning and, later, a successful test request made with curl:

process 1: D-Bus library appears to be incorrectly set up; failed to read machine uuid: Failed to open "/etc/machine-id": No such file or directory
See the manual page for dbus-uuidgen to correct this issue.
2017-01-12 10:48:03.341100 [events] {"path": "/render.html", "load": [0.07, 0.02, 0.0], "fds": 19, "client_ip": "172.17.0.1", "_id": 140690919672912, "method": "GET", "rendertime": 6.497595548629761, "active": 0, "qsize": 0, "maxrss": 83860, "args": {"uid": 140690919672912, "url": "http://www.examp\u200c\u200ble.com/"}, 
"timestamp": 1484218083, "status_code": 200, "user-agent": "curl/7.51.0"}
2017-01-12 10:48:03.343167 [-] "172.17.0.1" - - [12/Jan/2017:10:48:02 +0000] "GET /render.html?url=http%3A%2F%2Fwww.examp\xe2\x80\x8c\xe2\x80\x8ble.com%2F HTTP/1.1" 200 1262 "-" "curl/7.51.0"

So everything seems fine; Splash returns the body HTML. However, when I try a request from the tutorial like this:

import scrapy
from scrapy_splash import SplashRequest


class MySpider(scrapy.Spider):
    name = 'spiderman'
    domain = ['web']
    start_urls = ['http://www.example.com']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse, args={'wait': 0.5})

    def parse(self, response):
        response.body
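For context, SplashRequest never contacts www.example.com directly: scrapy-splash wraps the target URL and the args into a JSON POST to Splash's render.html endpoint at SPLASH_URL, and Splash fetches and renders the page (the stats in the log below reflect this: request_method_count/POST and splash/render.html/request_count). A rough sketch of the equivalent direct call, assuming the requests package is installed and a Splash instance listens on localhost:8050:

# Illustrative sketch only: roughly what scrapy-splash sends on the wire.
# Assumes `pip install requests` and a Splash instance on localhost:8050.
import requests

payload = {'url': 'http://www.example.com', 'wait': 0.5}
resp = requests.post('http://localhost:8050/render.html', json=payload)

print(resp.status_code)   # 200 when Splash rendered the page
print(resp.text[:200])    # start of the rendered HTML

If this POST cannot reach SPLASH_URL, Scrapy surfaces exactly the kind of ConnectionRefusedError / TCPTimedOutError shown below.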
When I run this spider, I get the following message in the terminal:

File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 61: Connection refused.
2017-01-12 11:02:50 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:03:06 [scrapy.downloadermiddlewares.retry] DEBUG: 
Retrying <GET http://192.168.59.103:8050/robots.txt> (failed 1 times): TCP connection timed out: 60: Operation timed out
So nothing gets crawled.

Does anyone know how to solve this?

EDIT: Changing ROBOTSTXT_OBEY to False has no effect. Here is the whole console log:

$ scrapy crawl spiderman
2017-01-12 11:25:18 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: myScrapingProject)
2017-01-12 11:25:18 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'myScrapingProject', 'DOWNLOAD_DELAY': 0.25, 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage', 'NEWSPIDER_MODULE': 'myScrapingProject.spiders', 'SPIDER_MODULES': ['myScrapingProject.spiders'], 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.7'}
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy_splash.SplashCookiesMiddleware',
 'scrapy_splash.SplashMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy_splash.SplashDeduplicateArgsMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-01-12 11:25:18 [scrapy.core.engine] INFO: Spider opened
2017-01-12 11:25:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:25:18 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-12 11:26:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:26:33 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 1 times): TCP connection timed out: 60: Operation timed out.
2017-01-12 11:27:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:27:48 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 2 times): TCP connection timed out: 60: Operation timed out.
2017-01-12 11:28:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-12 11:29:03 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 3 times): TCP connection timed out: 60: Operation timed out.
2017-01-12 11:29:03 [scrapy.core.scraper] ERROR: Error downloading <GET http://www.example.com via http://192.168.59.103:8050/render.html>
Traceback (most recent call last):
  File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/twisted/internet/defer.py", line 1297, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "/Users/username/myVirtualEnvironment/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
    defer.returnValue((yield download_func(request=request,spider=spider)))
twisted.internet.error.TCPTimedOutError: TCP connection timed out: 60: Operation timed out.
2017-01-12 11:29:03 [scrapy.core.engine] INFO: Closing spider (finished)
2017-01-12 11:29:03 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 3,
 'downloader/exception_type_count/twisted.internet.error.TCPTimedOutError': 3,
 'downloader/request_bytes': 1746,
 'downloader/request_count': 3,
 'downloader/request_method_count/POST': 3,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 1, 12, 10, 29, 3, 935527),
 'log_count/DEBUG': 4,
 'log_count/ERROR': 1,
 'log_count/INFO': 10,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'splash/render.html/request_count': 1,
 'start_time': datetime.datetime(2017, 1, 12, 10, 25, 18, 451764)}
2017-01-12 11:29:03 [scrapy.core.engine] INFO: Spider closed (finished)

The problem is that SPLASH_URL must point to the Splash instance that is actually running, typically:

SPLASH_URL = 'http://localhost:8050'

and not http://192.168.59.103:8050, which appears in the error log:

Retrying <GET http://www.example.com via http://192.168.59.103:8050/render.html> (failed 1 times)
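For reference, the scrapy-splash README's setup block in settings.py looks like this (a sketch assuming Splash runs locally on port 8050; the middleware names match the "Enabled downloader middlewares" list in the log above):

# settings.py — scrapy-splash configuration per the project README.
# SPLASH_URL below assumes a local Docker setup; adjust it if Splash
# runs elsewhere.
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

Note that 192.168.59.103 was the default IP of the old boot2docker / docker-machine VM; on such a setup Splash is reachable at the address reported by docker-machine ip rather than at localhost.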


Do you have scrapy_splash.SplashMiddleware enabled in DOWNLOADER_MIDDLEWARES in your Scrapy project settings? You can also disable robots.txt handling with ROBOTSTXT_OBEY = False in your Scrapy settings.py, and check that Splash is up and running by opening its web interface.

Yeah, typo, sorry. Hmm, how do I enable scrapy_splash.SplashMiddleware? I can't find it in the README. Sorry; I think it is already enabled, because I also get this message: 2017-01-12 11:25:18 [scrapy.middleware] INFO: Enabled downloader middlewares: 'scrapy_splash.SplashCookiesMiddleware', 'scrapy_splash.SplashMiddleware'. And yes, Splash is up and running!

Have you tried disabling robots.txt handling? If it still doesn't work, please paste the console log from where you run scrapy crawl (all of it, not just the part ending with "Retrying"). If you see anything in the Splash log in the other console, you can paste that as well.

@paultrmbrth Unfortunately ROBOTSTXT_OBEY = False didn't help. Edited; the whole console log is in the question now.

Then I don't know. Can you test your setup with curl? Something like: curl http://localhost:8050/render.html?url=http%3A%2F%2Fwww.example.com%2F
After changing SPLASH_URL to '' I still get the problem! Are there other possible causes for this issue?

@user345, you might want to open another question on StackOverflow. Did you include 'http://' in your SPLASH_URL?

Thanks for your answer, it helped solve the problem. I wonder why the scrapy-splash README isn't clearer about this (or is it supposed to be obvious)?
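Regarding the missing-scheme question above: a quick standalone sanity check (a hypothetical helper, not part of scrapy-splash) for the SPLASH_URL value might look like this:

# Hypothetical standalone check: scrapy-splash joins SPLASH_URL with the
# endpoint path, so a value without an explicit http:// or https:// scheme
# will not resolve to the Splash instance as expected.
from urllib.parse import urlparse

splash_url = 'http://localhost:8050'  # the value from your settings.py

parsed = urlparse(splash_url)
if parsed.scheme not in ('http', 'https') or not parsed.netloc:
    raise ValueError("SPLASH_URL needs an explicit scheme, e.g. 'http://localhost:8050'")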