HTTP error when scraping images with urllib in Python 3


I have a list of URLs, and I am scraping the images from each site with the code below, using urllib in Python 3:

import urllib.request
import requests
from bs4 import BeautifulSoup

i = 0
all_image_links = []
r = requests.get(urllink)
data = r.text
soup = BeautifulSoup(data, "lxml")
name = soup.find('title')
name = name.text
# collect the src of every <img> tag and prepend the page URL
for link in soup.find_all('img'):
    image_link = link.get('src')
    final_link = urllink + image_link
    all_image_links.append(final_link)
# download each image, naming files after the page title plus a counter
for each in all_image_links:
    urllib.request.urlretrieve(each, name + str(i))
    i = i + 1
I get the following error:

Traceback (most recent call last):
  File "j1.py", line 91, in <module>
    import_personal_images(each)
  File "j1.py", line 63, in import_personal_images
    urllib.request.urlretrieve(each,name+str(i))
  File "/usr/lib/python3.5/urllib/request.py", line 188, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/usr/lib/python3.5/urllib/request.py", line 163, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.5/urllib/request.py", line 472, in open
    response = meth(req, response)
  File "/usr/lib/python3.5/urllib/request.py", line 582, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.5/urllib/request.py", line 510, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 444, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.5/urllib/request.py", line 590, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
I have already tried the two variants shown below under 1) and 2), but I still get the same error. Can someone explain what is incorrect in my code?

HTTP 403 means the server understood the request but will not fulfill it, for reasons unrelated to authorization.

The server is actively refusing you access to this file: you have exceeded a rate limit, or you are not logged in and are trying to access a privileged resource.

Unless the change supplies the authorization/authentication the server expects, no code change will make this error go away.
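As a side note, urllib raises urllib.error.HTTPError for any non-2xx status, so the status code and the server's stated reason can be inspected instead of letting the script die. A minimal sketch (the image URL is a placeholder):

import urllib.request
import urllib.error

try:
    urllib.request.urlretrieve("http://example.com/image.jpg", "image.jpg")
except urllib.error.HTTPError as e:
    # e.code is the numeric status (403 here), e.reason the server's explanation
    print(e.code, e.reason)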

1):

from urllib.request import Request, urlopen
import urllib.request
from bs4 import BeautifulSoup

all_image_links = []
i = 0
# fetch the page with a browser-like User-Agent header
req = Request(urllink, headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
r = webpage.decode('utf-8')
soup = BeautifulSoup(r, "lxml")
for link in soup.find_all('img'):
    image_link = link.get('src')
    all_image_links.append(urllink + image_link)
for each in all_image_links:
    urllib.request.urlretrieve(each, str(i))
    i = i + 1
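One likely reason variant 1 still fails: the Mozilla/5.0 header is only sent when fetching the HTML, while urlretrieve downloads every image with urllib's default Python-urllib User-Agent, which many servers reject. Assuming the 403 is User-Agent based (it may not be, per the explanation above), a sketch of a drop-in replacement for the final loop, reusing its all_image_links and i:

from urllib.request import Request, urlopen

for each in all_image_links:
    # urlretrieve() cannot send custom headers; urlopen() on a Request can
    req = Request(each, headers={'User-Agent': 'Mozilla/5.0'})
    with urlopen(req) as resp, open(str(i), 'wb') as out:
        out.write(resp.read())
    i = i + 1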

2):

import urllib.request
import requests
from bs4 import BeautifulSoup

all_image_links = []
i = 0
headers = {'User-Agent': 'Mozilla/5.0'}  # note: defined but never used below
page = requests.get(urllink)
soup = BeautifulSoup(page.text, "html.parser")
for link in soup.find_all('img'):
    image_link = link.get('src')
    print(image_link)
    all_image_links.append(urllink + image_link)
for each in all_image_links:
    urllib.request.urlretrieve(each, str(i))
    i = i + 1
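Variant 2 has the same gap, plus one more: the headers dict is defined but never passed to requests.get, and the downloads still go through urlretrieve. A sketch that sends the header on every request and uses urllib.parse.urljoin to build image URLs (plain string concatenation breaks on absolute or root-relative src values); this still assumes the server filters on User-Agent, since a login wall or rate limit would keep returning 403:

from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup

urllink = "http://example.com/"  # placeholder page URL
headers = {'User-Agent': 'Mozilla/5.0'}
page = requests.get(urllink, headers=headers)  # actually pass the headers
soup = BeautifulSoup(page.text, "html.parser")
for i, link in enumerate(soup.find_all('img')):
    src = link.get('src')
    if not src:
        continue  # skip <img> tags without a src attribute
    image_url = urljoin(urllink, src)
    resp = requests.get(image_url, headers=headers)
    resp.raise_for_status()  # surfaces a clear error instead of silently failing
    with open(str(i), 'wb') as out:
        out.write(resp.content)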