
How do I download a file over HTTP using Python?


I have a small utility that I use to download MP3 files from a website on a schedule and then build/update a podcast XML file, which I add to iTunes.

The text processing that creates/updates the XML file is written in Python. However, I use wget inside a Windows .bat file to download the actual MP3 file. I would prefer to have the entire utility written in Python.

I struggled to find a way to actually download the file in Python, which is why I resorted to using wget.

So, how do I download the file using Python?

Use urllib.request.urlopen() (Python 3):
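
A minimal sketch of the Python 3 version, mirroring the urllib2 code shown below:

import urllib.request

mp3file = urllib.request.urlopen("http://www.example.com/songs/mp3.mp3")
with open('test.mp3', 'wb') as output:
    output.write(mp3file.read())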

This is the most basic way to use the library, minus any error handling. You can also do more complex things, such as changing headers.

On Python 2, the method is in urllib2:

import urllib2

mp3file = urllib2.urlopen("http://www.example.com/songs/mp3.mp3")
with open('test.mp3', 'wb') as output:
    output.write(mp3file.read())

The wb in open('test.mp3', 'wb') opens the file in binary write mode (and erases any existing file), so you can save data with it instead of just text.

I agree with Corey: urllib2 is more complete than urllib and should likely be the module you use if you want to do more complex things, but to make the answers more complete, urllib is a simpler module if you want just the basics:

import urllib
response = urllib.urlopen('http://www.example.com/sound.mp3')
mp3 = response.read()
That will work fine. Or, if you don't want to deal with the "response" object, you can call read() directly:

import urllib
mp3 = urllib.urlopen('http://www.example.com/sound.mp3').read()
One more, using urlretrieve:

import urllib
urllib.urlretrieve('http://www.example.com/songs/mp3.mp3', 'mp3.mp3')

(For Python 3+, use import urllib.request and urllib.request.urlretrieve.)

One more, with a "progressbar":
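
The code here is essentially the urllib2 download loop shown in a later answer below, without the os.system('cls') flourish (Python 2):

import urllib2

url = "http://download.thinkbroadband.com/10MB.zip"

file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break

    file_size_dl += len(buffer)
    f.write(buffer)
    # overwrite the status line in place with backspace characters
    status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    status = status + chr(8)*(len(status)+1)
    print status,

f.close()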

In 2012, use the requests library. You can run pip install requests to get it.
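
A minimal sketch (the URL is the same 10 MB test file used elsewhere in this thread):

import requests

url = "http://download.thinkbroadband.com/10MB.zip"
response = requests.get(url)
# response.content holds the entire body in memory; see the streaming
# examples later in this thread for very large files
with open("10MB.zip", "wb") as f:
    f.write(response.content)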

Requests has many advantages over the alternatives, because the API is much simpler. This is especially true if you have to do authentication; in that case, urllib and urllib2 are pretty unintuitive and painful.
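
For example, a hedged sketch of HTTP basic auth with requests (URL and credentials are placeholders):

import requests

# basic auth is a single keyword argument
response = requests.get("http://www.example.com/protected.mp3",
                        auth=("user", "password"))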


Update, 2015-12-30:

People have expressed admiration for the progress bar; it's cool, sure. There are several off-the-shelf solutions now, including tqdm:

from tqdm import tqdm
import requests

url = "http://download.thinkbroadband.com/10MB.zip"
response = requests.get(url, stream=True)

# iterate in 1 KiB chunks; iter_content() with no chunk_size yields
# single bytes, which makes the loop painfully slow
with open("10MB", "wb") as handle:
    for data in tqdm(response.iter_content(chunk_size=1024)):
        handle.write(data)

This is essentially the implementation @kvance described 30 months ago.

An improved version of PabloG's code, for Python 2/3:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import ( division, absolute_import, print_function, unicode_literals )

import sys, os, tempfile, logging

if sys.version_info >= (3,):
    import urllib.request as urllib2
    import urllib.parse as urlparse
else:
    import urllib2
    import urlparse

def download_file(url, dest=None):
    """ 
    Download and save a file specified by url to dest directory,
    """
    u = urllib2.urlopen(url)

    scheme, netloc, path, query, fragment = urlparse.urlsplit(url)
    filename = os.path.basename(path)
    if not filename:
        filename = 'downloaded.file'
    if dest:
        filename = os.path.join(dest, filename)

    with open(filename, 'wb') as f:
        meta = u.info()
        meta_func = meta.getheaders if hasattr(meta, 'getheaders') else meta.get_all
        meta_length = meta_func("Content-Length")
        file_size = None
        if meta_length:
            file_size = int(meta_length[0])
        print("Downloading: {0} Bytes: {1}".format(url, file_size))

        file_size_dl = 0
        block_sz = 8192
        while True:
            buffer = u.read(block_sz)
            if not buffer:
                break

            file_size_dl += len(buffer)
            f.write(buffer)

            status = "{0:16}".format(file_size_dl)
            if file_size:
                status += "   [{0:6.2f}%]".format(file_size_dl * 100 / file_size)
            status += chr(13)
            print(status, end="")
        print()

    return filename

if __name__ == "__main__":  # Only run if this file is called directly
    print("Testing with 10MB download")
    url = "http://download.thinkbroadband.com/10MB.zip"
    filename = download_file(url)
    print(filename)

There is a library written in pure Python for exactly this purpose: wget. It has been up and running since version 2.0.
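
A minimal sketch, assuming the wget module from PyPI (the same wget.download call used in the speed test later in this thread):

import wget

# wget.download returns the name of the file it saved
filename = wget.download("http://www.example.com/songs/mp3.mp3")
print(filename)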

This may be a little late, but I saw pabloG's code and couldn't help adding an os.system('cls') to make it look AWESOME! Check it out:

    import urllib2,os

    url = "http://download.thinkbroadband.com/10MB.zip"

    file_name = url.split('/')[-1]
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    meta = u.info()
    file_size = int(meta.getheaders("Content-Length")[0])
    print "Downloading: %s Bytes: %s" % (file_name, file_size)
    os.system('cls')
    file_size_dl = 0
    block_sz = 8192
    while True:
        buffer = u.read(block_sz)
        if not buffer:
            break

        file_size_dl += len(buffer)
        f.write(buffer)
        status = r"%10d  [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
        status = status + chr(8)*(len(status)+1)
        print status,

    f.close()
If running in an environment other than Windows, you will have to use something other than 'cls'. On Mac OS X and Linux it should be 'clear'.
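
A small sketch of a cross-platform variant, using os.name to pick the command:

    import os

    # 'cls' on Windows (os.name == 'nt'), 'clear' on Mac OS X / Linux
    os.system('cls' if os.name == 'nt' else 'clear')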

The source code can be:

import urllib
sock = urllib.urlopen("http://diveintopython.org/")  # open the URL
htmlSource = sock.read()                             # read the page source
sock.close()                                         # close the connection
print htmlSource                                     # print the source

You can get progress feedback with urlretrieve as well:

import sys
import urllib

def report(blocknr, blocksize, size):
    current = blocknr * blocksize
    sys.stdout.write("\r{0:.2f}%".format(100.0 * current / size))

def downloadFile(url):
    print "\n", url
    fname = url.split('/')[-1]
    print fname
    urllib.urlretrieve(url, fname, report)
Use:

Python 3

  • urllib.request.urlopen:

    import urllib.request
    response = urllib.request.urlopen('http://www.example.com/')
    html = response.read()

  • urllib.request.urlretrieve:

    import urllib.request
    urllib.request.urlretrieve('http://www.example.com/songs/mp3.mp3', 'mp3.mp3')

    Note: according to the documentation, urllib.request.urlretrieve is a "legacy interface" and "might become deprecated in the future".

Python 2

  • urllib2.urlopen:

    import urllib2
    response = urllib2.urlopen('http://www.example.com/')
    html = response.read()

  • urllib.urlretrieve:

    import urllib
    urllib.urlretrieve('http://www.example.com/songs/mp3.mp3', 'mp3.mp3')


If you have wget installed, you can use parallel_sync.

Install parallel_sync:

pip install parallel_sync

from parallel_sync import wget
urls = ['http://something.png', 'http://somthing.tar.gz', 'http://somthing.zip']
wget.download('/tmp', urls)
# or a single file:
wget.download('/tmp', urls[0], filenames='x.zip', extract=True)
The documentation has more details. This is pretty powerful: it can download files in parallel, retry upon failure, and it can even download files on a remote machine.

Here are the most commonly used calls for downloading files in Python:

  • urllib.urlretrieve('url_to_file', file_name)

  • urllib2.urlopen('url_to_file')

  • requests.get(url)

  • wget.download('url', file_name)


  • Note: urlopen and urlretrieve are found to perform relatively badly when downloading large files (size > 500 MB). requests.get stores the file in memory until the download is complete (the streaming requests example later in this thread avoids this).

urlretrieve and requests.get are simple, but the reality is not. I have fetched data from a couple of sites, including text and images, and the two above probably solve most of the tasks. But for a more universal solution I suggest using urlopen. As it is included in the Python 3 standard library, your code can run on any machine that has Python 3, without pre-installing site-packages.

    import urllib.request

    url = 'http://www.example.com/songs/mp3.mp3'  # address of the file to fetch
    filename = 'mp3.mp3'                          # local name to save it under
    buffer_size = 8192
    headers = {'User-Agent': 'Mozilla/5.0'}       # some servers reject Python's default agent

    url_request = urllib.request.Request(url, headers=headers)
    url_connect = urllib.request.urlopen(url_request)

    # remember to open the file in bytes mode
    with open(filename, 'wb') as f:
        while True:
            buffer = url_connect.read(buffer_size)
            if not buffer:
                break
            # f.write returns the number of bytes written
            data_wrote = f.write(buffer)

    # you could also manage url_connect with a with statement instead
    url_connect.close()
    

This answer provides a solution to HTTP 403 Forbidden when downloading a file over HTTP using Python. I have tried only the requests and urllib modules; another module may provide something better, but this is the one I used to solve most of the problems.
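
A minimal sketch of this approach with requests, assuming the 403 comes from the server rejecting Python's default User-Agent (URL and filename are placeholders):

    import requests

    url = "http://www.example.com/file.zip"  # placeholder URL
    # many servers return 403 for Python's default User-Agent; send a browser-like one
    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers)
    response.raise_for_status()  # raises if the server still refuses
    with open("file.zip", "wb") as f:
        f.write(response.content)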

I wrote the following, which works in vanilla Python 2 or Python 3:

    import sys
    try:
        import urllib.request
        python3 = True
    except ImportError:
        import urllib2
        python3 = False


    def progress_callback_simple(downloaded,total):
        sys.stdout.write(
            "\r" +
            (len(str(total))-len(str(downloaded)))*" " + str(downloaded) + "/%d"%total +
            " [%3.2f%%]"%(100.0*float(downloaded)/float(total))
        )
        sys.stdout.flush()

    def download(srcurl, dstfilepath, progress_callback=None, block_size=8192):
        def _download_helper(response, out_file, file_size):
            if progress_callback!=None: progress_callback(0,file_size)
            if block_size == None:
                buffer = response.read()
                out_file.write(buffer)

                if progress_callback!=None: progress_callback(file_size,file_size)
            else:
                file_size_dl = 0
                while True:
                    buffer = response.read(block_size)
                    if not buffer: break

                    file_size_dl += len(buffer)
                    out_file.write(buffer)

                    if progress_callback!=None: progress_callback(file_size_dl,file_size)
        with open(dstfilepath,"wb") as out_file:
            if python3:
                with urllib.request.urlopen(srcurl) as response:
                    file_size = int(response.getheader("Content-Length"))
                    _download_helper(response,out_file,file_size)
            else:
                response = urllib2.urlopen(srcurl)
                meta = response.info()
                file_size = int(meta.getheaders("Content-Length")[0])
                _download_helper(response,out_file,file_size)

    import traceback
    try:
        download(
            "https://geometrian.com/data/programming/projects/glLib/glLib%20Reloaded%200.5.9/0.5.9.zip",
            "output.zip",
            progress_callback_simple
        )
    except:
        traceback.print_exc()
        input()
Notes:

  • Supports a "progress bar" callback.
  • The download is a 4 MB test .zip from my website.

A simple but Python 2 and Python 3 compatible way comes with the six library:

    from six.moves import urllib
    urllib.request.urlretrieve("http://www.example.com/songs/mp3.mp3", "mp3.mp3")
    

If speed is important to you, I ran a small performance test of the urllib and wget modules; for wget, I tested once with a status bar and once without. I used three different 500 MB files for the test (different files, to eliminate the chance that caching under the hood skews the results). Tested on a Debian machine, with Python 2.

First, these are the results (they were similar across different runs):

    $ python wget_test.py 
    urlretrive_test : starting
    urlretrive_test : 6.56
    ==============
    wget_no_bar_test : starting
    wget_no_bar_test : 7.20
    ==============
    wget_with_bar_test : starting
    100% [......................................................................] 541335552 / 541335552
    wget_with_bar_test : 50.49
    ==============

The way I ran the tests was with a "profile" decorator. This is the full code:

    import wget
    import urllib
    import time
    from functools import wraps
    
    def profile(func):
        @wraps(func)
        def inner(*args):
            print func.__name__, ": starting"
            start = time.time()
            ret = func(*args)
            end = time.time()
            print func.__name__, ": {:.2f}".format(end - start)
            return ret
        return inner
    
    url1 = 'http://host.com/500a.iso'
    url2 = 'http://host.com/500b.iso'
    url3 = 'http://host.com/500c.iso'
    
    def do_nothing(*args):
        pass
    
    @profile
    def urlretrive_test(url):
        return urllib.urlretrieve(url)
    
    @profile
    def wget_no_bar_test(url):
        return wget.download(url, out='/tmp/', bar=do_nothing)
    
    @profile
    def wget_with_bar_test(url):
        return wget.download(url, out='/tmp/')
    
    urlretrive_test(url1)
    print '=============='
    time.sleep(1)
    
    wget_no_bar_test(url2)
    print '=============='
    time.sleep(1)
    
    wget_with_bar_test(url3)
    print '=============='
    time.sleep(1)
    

urllib seems to be the fastest here; the status bar accounts for most of wget's extra time (50.49 s with it vs. 7.20 s without).

You can use the urllib3 and shutil libraries. Download urllib3 using pip or pip3 (depending on whether Python 3 is the default); shutil already ships with Python:

    pip3 install urllib3

Then run this code:

    import urllib.request
    import shutil
    
    url = "http://www.somewebsite.com/something.pdf"
    output_file = "save_this_name.pdf"
    with urllib.request.urlopen(url) as response, open(output_file, 'wb') as out_file:
        shutil.copyfileobj(response, out_file)
    
Note that even though you download urllib3, the code itself uses the built-in urllib.request and shutil modules.

You can also use pycurl:

    import pycurl
    
    FILE_DEST = 'pycurl.html'
    FILE_SRC = 'http://pycurl.io/'
    
    with open(FILE_DEST, 'wb') as f:
        c = pycurl.Curl()
        c.setopt(c.URL, FILE_SRC)
        c.setopt(c.WRITEDATA, f)
        c.perform()
        c.close()
    

For completeness, it is also possible to call any program for retrieving files using the subprocess package. Programs dedicated to retrieving files are more powerful than Python functions such as urlretrieve:

    import subprocess
    subprocess.check_output(['wget', '-O', 'example_output_file.html', 'https://example.com'])

In Jupyter Notebook, you can also call such programs directly with the ! syntax:

    !wget -O example_output_file.html https://example.com
    
Another option is to stream the download with requests and write it to disk in chunks:

    import requests

    def download(url):
        get_response = requests.get(url, stream=True)
        file_name = url.split("/")[-1]
        with open(file_name, 'wb') as f:
            for chunk in get_response.iter_content(chunk_size=1024):
                if chunk:  # filter out keep-alive new chunks
                    f.write(chunk)

    download("https://example.com/example.jpg")
    
You can also use the dload library:

    pip3 install dload

    import dload
    dload.save(url)  # url: the address of the file to download
    
Finally, you can call curl through subprocess:

    from subprocess import call
    url = ""  # fill in the URL to download
    call(["curl", url, '--output', "song.mp3"])