
Strange HTTP 410 with Python urllib, not reproducible with wget


I'm using urllib in Python 3 to fetch some images from my server:

import urllib.request
import urllib.error

try:
    resp = urllib.request.urlopen(url)
except urllib.error.HTTPError as err:
    print("code " + str(err.status) + " reason " + err.reason)
Running the file outputs a 410 HTTP Gone error:

 $ python3.6 file.py 

download: http://some_url.com/image.jpg
code 410 reason Gone

Traceback (most recent call last):
  File "file.py", line 32, in <module>
    image = image_from_url(url)
Any idea what could cause this? Is there something wrong on the server side? Are there specific headers that should be included in the urllib request?


Thanks.

Your request:

GET /wikipedia/commons/c/c9/Moon.jpg HTTP/1.1
Accept-Encoding: identity
Host: upload.wikimedia.org
User-Agent: Python-urllib/3.6
Connection: close
wget request:

GET /wikipedia/commons/c/c9/Moon.jpg HTTP/1.1
User-Agent: Wget/1.19.4 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: upload.wikimedia.org
Connection: Keep-Alive
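As a side note, if you want to double-check which headers urllib actually sends, one way (a sketch only, not necessarily how the dumps above were captured) is to enable debug output on the urllib handlers:

import urllib.request
import urllib.error

# Print the outgoing request line and headers to stdout for every request
# made through this opener (works for both http and https URLs).
opener = urllib.request.build_opener(
    urllib.request.HTTPHandler(debuglevel=1),
    urllib.request.HTTPSHandler(debuglevel=1),
)

url = "http://upload.wikimedia.org/wikipedia/commons/c/c9/Moon.jpg"
try:
    resp = opener.open(url)
except urllib.error.HTTPError as err:
    # With the default Python-urllib User-Agent this may still fail with 410;
    # the point here is only to see what was sent.
    print("code " + str(err.code) + " reason " + err.reason)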
Have you tried adding an Accept: */* header? Some research suggests it is common practice to filter out requests missing this header, since they usually come from bots:

req = urllib.request.Request('some_url', headers={'Accept': '*/*'})
resp = urllib.request.urlopen(req)

OK, thanks for your reply. It turns out the missing header was the User-Agent: simply adding 'User-Agent': 'a' solved the problem. I'm not sure why the server behaves this way, though.
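For completeness, a minimal sketch of that fix, using the placeholder URL from the question and the throwaway 'a' User-Agent value mentioned above:

import urllib.request
import urllib.error

url = "http://some_url.com/image.jpg"  # placeholder URL from the question

# Overriding the default Python-urllib User-Agent (any non-empty value,
# even "a") is what stopped the server from answering with 410 Gone.
req = urllib.request.Request(url, headers={"User-Agent": "a", "Accept": "*/*"})
try:
    resp = urllib.request.urlopen(req)
    image = resp.read()
except urllib.error.HTTPError as err:
    print("code " + str(err.code) + " reason " + err.reason)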