
URL works fine in a browser or wget, but is empty in Python or cURL


When I try to open http://www.comicbookdb.com/browse.php from Python (it works fine in my browser), I get an empty response:

>>> import urllib.request
>>> content = urllib.request.urlopen('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''
The same thing happens when I set a User-Agent:

>>> opener = urllib.request.build_opener()
>>> opener.addheaders = [('User-agent', 'Mozilla/5.0')]
>>> content = opener.open('http://www.comicbookdb.com/browse.php')
>>> print(content.read())
b''
Or when I use httplib2 instead:

>>> import httplib2
>>> h = httplib2.Http('.cache')
>>> response, content = h.request('http://www.comicbookdb.com/browse.php')
>>> print(content)
b''
>>> print(response)
{'cache-control': 'no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'content-location': 'http://www.comicbookdb.com/browse.php', 'expires': 'Thu, 19 Nov 1981 08:52:00 GMT', 'content-length': '0', 'set-cookie': 'PHPSESSID=590f5997a91712b7134c2cb3291304a8; path=/', 'date': 'Wed, 25 Dec 2013 15:12:30 GMT', 'server': 'Apache', 'pragma': 'no-cache', 'content-type': 'text/html', 'status': '200'}
Or when I try to download it with cURL:

C:\>curl -v http://www.comicbookdb.com/browse.php
* About to connect() to www.comicbookdb.com port 80
*   Trying 208.76.81.137... * connected
* Connected to www.comicbookdb.com (208.76.81.137) port 80
> GET /browse.php HTTP/1.1
User-Agent: curl/7.13.1 (i586-pc-mingw32msvc) libcurl/7.13.1 zlib/1.2.2
Host: www.comicbookdb.com
Pragma: no-cache
Accept: */*

< HTTP/1.1 200 OK
< Date: Wed, 25 Dec 2013 15:20:06 GMT
< Server: Apache
< Expires: Thu, 19 Nov 1981 08:52:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Pragma: no-cache
< Set-Cookie: PHPSESSID=0a46f2d390639da7eb223ad47380b394; path=/
< Content-Length: 0
< Content-Type: text/html
* Connection #0 to host www.comicbookdb.com left intact
* Closing connection #0
Opening the URL in a browser or downloading it with Wget works fine, though:

C:\>wget http://www.comicbookdb.com/browse.php
--16:16:26--  http://www.comicbookdb.com/browse.php
           => `browse.php'
Resolving www.comicbookdb.com... 208.76.81.137
Connecting to www.comicbookdb.com[208.76.81.137]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [    <=>                              ] 40,687        48.75K/s

16:16:27 (48.75 KB/s) - `browse.php' saved [40687]
Downloading a different file from the same server also works:

>>> content = urllib.request.urlopen('http://www.comicbookdb.com/index.php')
>>> print(content.read(100))
b'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"\n\t\t"http://www.w3.org/TR/1999/REC-html'
The server seems to expect a

Connection: keep-alive

header, which curl (and, I would expect, the other failing clients too) does not add by default.
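The observed behaviour, a server returning a 200 with an empty body unless the client sends a Connection: keep-alive header, can be simulated locally. The sketch below is only a stand-in for whatever the real site was doing (the handler and its response body are invented for illustration); it shows why a bare http.client request comes back empty while the same request with the header does not:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PickyHandler(BaseHTTPRequestHandler):
    """Returns an empty body unless the client sent Connection: keep-alive."""
    def do_GET(self):
        if self.headers.get('Connection', '').lower() == 'keep-alive':
            body = b'<html>content</html>'
        else:
            body = b''
        self.send_response(200)
        self.send_header('Content-Type', 'text/html')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output clean
        pass

server = HTTPServer(('127.0.0.1', 0), PickyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(extra_headers):
    # http.client sends no Connection header of its own, so we control it fully
    conn = http.client.HTTPConnection('127.0.0.1', port)
    conn.request('GET', '/browse.php', headers=extra_headers)
    data = conn.getresponse().read()
    conn.close()
    return data

empty = fetch({})                           # no Connection header, like curl
full = fetch({'Connection': 'keep-alive'})  # with the header, like wget
print(empty)  # b''
print(full)   # b'<html>content</html>'
server.shutdown()
```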

With curl, the following command works and shows a non-empty response:

curl -v -H 'Connection: keep-alive' http://www.comicbookdb.com/browse.php
With Python, you can use the following code:

import httplib2
h = httplib2.Http('.cache')
# Explicitly send the Connection: keep-alive header the server expects
response, content = h.request('http://www.comicbookdb.com/browse.php', headers={'Connection': 'keep-alive'})
print(content)
print(response)
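A note on why the urllib.request attempts above could not be fixed the same way: CPython's urllib.request overrides any user-supplied Connection header with "Connection: close" before sending the request, so the keep-alive workaround has to go through a client that leaves the header alone (httplib2, http.client, or requests). The sketch below demonstrates this against a local echo server (the server is invented for illustration; it simply replies with whatever Connection header the client actually sent):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoConnectionHeader(BaseHTTPRequestHandler):
    """Replies with the Connection header value received from the client."""
    def do_GET(self):
        body = self.headers.get('Connection', 'none').encode()
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(('127.0.0.1', 0), EchoConnectionHeader)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = 'http://127.0.0.1:%d/' % server.server_address[1]

# Ask urllib to send keep-alive; it silently replaces it with "close"
req = urllib.request.Request(url, headers={'Connection': 'keep-alive'})
sent = urllib.request.urlopen(req).read()
print(sent)  # b'close' - urllib replaced our keep-alive header
server.shutdown()
```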

That did the trick, thanks! Do you know why the server would expect that header for one URL but not for another?

My guess is some configuration upstream of the PHP script, perhaps a caching server, since I can't see how the PHP script itself would be affected; I'm afraid I can't offer any better ideas, though.

I have the same problem, but setting the Connection header doesn't do anything.