Python 3.x: specifying an output path when downloading files from a URL

Tags: python-3.x, python-requests, shutil

I have some files that I am downloading from a URL.

I currently access my files like this:

import requests
from bs4 import BeautifulSoup
import os
import shutil

prefix = 'https://n5eil01u.ecs.nsidc.org/MOST/MOD10A1.006/'
download_url = "https://path_to_website"

s = requests.session()
soup = BeautifulSoup(s.get(download_url).text, "lxml")

for a in soup.find_all('a', href=True):
    final_link = os.path.join(prefix, a['href'])
    result = s.get(final_link, stream=True)
    # Stream the response body into a local file named after the link
    with open(a['href'], 'wb') as out_file:
        shutil.copyfileobj(result.raw, out_file)

out_path = "C:/my_path"
prefix = 'https://n5eil01u.ecs.nsidc.org/MOST/MOD10A1.006/'

s = requests.session()                                                         
soup = BeautifulSoup(s.get(download_url).text, "lxml")  

for a in page.find_all('a', href=True):

     final_link = os.path.join(prefix, a['href'])
     download = wget.download(final_link, out = out_path)
This downloads the files fine and drops them into the default directory under C:/User.

But I want to choose where my files are downloaded. wget lets you choose where the output path goes, but with my wget attempt the files download empty, as if they were never accessed.

I do it with wget like this:

out_path = "C:/my_path"
prefix = 'https://n5eil01u.ecs.nsidc.org/MOST/MOD10A1.006/'

s = requests.session()                                                         
soup = BeautifulSoup(s.get(download_url).text, "lxml")  

for a in page.find_all('a', href=True):

     final_link = os.path.join(prefix, a['href'])
     download = wget.download(final_link, out = out_path)

I think wget doesn't work because I access the website with authentication (not shown), and when I join the final link I'm no longer accessing it with that authentication. Is there a way to specify an output path with shutil?
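For context, the authenticated session looks roughly like this. This is only a sketch: the question does not show the real authentication, so the use of HTTP Basic auth and the USER/PASSWORD names here are placeholders.

import requests

# Placeholder credentials; the real authentication is not shown above
USER = "my_username"
PASSWORD = "my_password"

s = requests.session()
# The session sends this auth (plus any cookies it has collected) on every
# s.get() call, which is why the shutil version downloads real content.
s.auth = (USER, PASSWORD)

# wget.download() opens its own connection and knows nothing about this
# session, which would match the guess that the files come back empty
# because that request is no longer authenticated.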

With the first approach, replace the path of the file you open with os.path.join(out_path, a['href']).
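A minimal sketch of that change, reusing the session, prefix, and loop from the question and borrowing the out_path name from the wget snippet:

out_path = "C:/my_path"

for a in soup.find_all('a', href=True):
    final_link = os.path.join(prefix, a['href'])
    result = s.get(final_link, stream=True)
    # Join the output folder with the file name so the download lands in out_path
    with open(os.path.join(out_path, a['href']), 'wb') as out_file:
        shutil.copyfileobj(result.raw, out_file)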


You can build the target path like this:

target_path = r'c:\windows\temp'
with open(os.path.join(target_path, a['href']), 'wb') as out_file:
    shutil.copyfileobj(result.raw, out_file)
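If the target folder might not exist yet, a small addition (an assumption, not part of the original answer) is to create it first with the standard library:

import os

target_path = r'c:\windows\temp'
# Create the folder (and any missing parents) before writing into it
os.makedirs(target_path, exist_ok=True)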

Ah, I see. Thank you very much.