Python 3.x HTTPError: HTTP Error 403: Forbidden (or None) is returned while downloading csv files from scraped links in Python 3

Tags: python-3.x, web-scraping, jupyter-notebook, wget, urllib

Please advise how to download csv files in Python 3. The scraped links to my csv files:
csv_link = ['/data-and-analysis/finances/table-2.csv',
            '/data-and-analysis/finances/table-3.csv',
            '/data-and-analysis/finances/table-3s.csv',
            '/data-and-analysis/finances/table-4.csv',
            '/data-and-analysis/finances/table-9.csv',
            '/data-and-analysis/finances/table-10.csv']
My code for downloading them:

import wget
for link in csv_link:
    full_link = 'https://www.hesa.ac.uk' + link
    print(print(full_link))
    wget.download(full_link)
I receive a 403 error:
https://www.hesa.ac.uk/data-and-analysis/finances/table-2.csv
None
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
<ipython-input-7-6d016e0bdd56> in <module>
3 full_link = 'https://www.hesa.ac.uk' + link
4 print(print(full_link))
----> 5 wget.download(full_link)
6
/usr/local/lib/python3.7/dist-packages/wget.py in download(url, out, bar)
524 else:
525 binurl = url
--> 526 (tmpfile, headers) = ulib.urlretrieve(binurl, tmpfile, callback)
527 filename = detect_filename(url, out, headers)
528 if outdir:
/usr/lib/python3.7/urllib/request.py in urlretrieve(url, filename, reporthook, data)
245 url_type, path = splittype(url)
246
--> 247 with contextlib.closing(urlopen(url, data)) as fp:
248 headers = fp.info()
249
/usr/lib/python3.7/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
220 else:
221 opener = _opener
--> 222 return opener.open(url, data, timeout)
223
224 def install_opener(opener):
/usr/lib/python3.7/urllib/request.py in open(self, fullurl, data, timeout)
529 for processor in self.process_response.get(protocol, []):
530 meth = getattr(processor, meth_name)
--> 531 response = meth(req, response)
532
533 return response
/usr/lib/python3.7/urllib/request.py in http_response(self, request, response)
639 if not (200 <= code < 300):
640 response = self.parent.error(
--> 641 'http', request, response, code, msg, hdrs)
642
643 return response
/usr/lib/python3.7/urllib/request.py in error(self, proto, *args)
567 if http_err:
568 args = (dict, 'default', 'http_error_default') + orig_args
--> 569 return self._call_chain(*args)
570
571 # XXX probably also want an abstract factory that knows when it makes
/usr/lib/python3.7/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
501 for handler in handlers:
502 func = getattr(handler, meth_name)
--> 503 result = func(*args)
504 if result is not None:
505 return result
/usr/lib/python3.7/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs)
647 class HTTPDefaultErrorHandler(BaseHandler):
648 def http_error_default(self, req, fp, code, msg, hdrs):
--> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp)
650
651 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 403: Forbidden
Please advise how to change my code so that I can download my files. I would also really like to know what the proper way is to download files from scraped links in Python 3 using a Jupyter notebook.

Answer:

I got it working with the help of the os module. I'm still looking for a proper way to do this, but here is my code that solves the problem for now:
import os
for link in csv_link:
    full_url = 'https://www.hesa.ac.uk' + link
    os.system('wget ' + full_url)
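As a side note, invoking wget through subprocess.run with an argument list is a bit more robust than os.system string concatenation, since a URL containing shell metacharacters cannot be reinterpreted by the shell. A minimal sketch, assuming the wget binary is on PATH:

```python
import subprocess

def wget_command(full_url):
    # Argument-list form: the URL reaches wget verbatim, so characters
    # like '&' or ';' in a scraped URL cannot break the command line.
    return ['wget', full_url]

def wget_download(full_url):
    # check=True raises CalledProcessError if wget exits non-zero,
    # instead of silently ignoring failures the way os.system does.
    subprocess.run(wget_command(full_url), check=True)
```
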
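As for the 403 itself: servers commonly reject urllib's default 'Python-urllib/3.x' user agent, and wget.download uses urllib under the hood, which would explain why the command-line wget succeeds where the Python package fails. A sketch of a pure-Python download that sends a browser-like User-Agent header; the 'Mozilla/5.0' value is an assumption and has not been tested against hesa.ac.uk:

```python
import os
import urllib.request

def build_request(full_url):
    # Attach a browser-like User-Agent; many servers return 403 for
    # the default 'Python-urllib/3.x' agent string.
    return urllib.request.Request(full_url,
                                  headers={'User-Agent': 'Mozilla/5.0'})

def download(full_url):
    # Save under the file's own name in the current working directory.
    filename = os.path.basename(full_url)
    with urllib.request.urlopen(build_request(full_url)) as resp, \
            open(filename, 'wb') as out:
        out.write(resp.read())
    return filename

# Usage (performs real network requests):
#     for link in csv_link:
#         download('https://www.hesa.ac.uk' + link)
```
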