Download a file from the web with Python 3

I am creating a program that downloads a .jar (Java) file from a web server, by reading the URL specified in the .jad file of the same game/application. I'm using Python 3.2.1.

I've managed to extract the URL of the JAR file from the JAD file (every JAD file contains the URL to the JAR file), but, as you may imagine, the extracted value is a type() string.

Here's the relevant function:

def downloadFile(URL=None):
    import httplib2
    h = httplib2.Http(".cache")
    resp, content = h.request(URL, "GET")
    return content

downloadFile(URL_from_file)

However, I keep getting an error saying that the type in the function above has to be bytes, and not string. I've tried using URL.encode('utf-8'), and also bytes(URL, encoding='utf-8'), but I always get the same or a similar error.


So basically my question is: how do I download a file from a server when the URL is stored as a string type?

If you want to obtain the contents of a web page into a variable, just read the response of urllib.request.urlopen.
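
A minimal sketch of that idea (the URL is a placeholder):

import urllib.request
...
url = 'http://example.com/'     # placeholder URL

response = urllib.request.urlopen(url)
data = response.read()          # a `bytes` object
text = data.decode('utf-8')     # a `str`; this step can't be done if the data is binary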


The easiest way to download and save a file is to use the urllib.request.urlretrieve function:

import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve(url, file_name)
But keep in mind that urlretrieve is considered legacy and might become deprecated (not sure why, though).

So the most correct way to do this would be to use the urllib.request.urlopen function to return a file-like object that represents an HTTP response, and then copy it to a real file using shutil.copyfileobj.
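
A minimal sketch of that approach (url and file_name are placeholders):

import shutil
import urllib.request
...
# Copy the HTTP response (a file-like object) straight into a local file:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)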

If this seems too complicated, you may want to go simpler and store the whole download in a bytes object and then write it to a file. But this works well only for small files.

import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    data = response.read() # a `bytes` object
    out_file.write(data)

It is possible to extract .gz (and maybe other formats) compressed data on the fly, but such an operation probably requires the HTTP server to support random access to the file.

import urllib.request
import gzip
...
# Read the first 64 bytes of the file inside the .gz archive located at `url`
url = 'http://example.com/something.gz'
with urllib.request.urlopen(url) as response:
    with gzip.GzipFile(fileobj=response) as uncompressed:
        file_header = uncompressed.read(64) # a `bytes` object
        # Or do anything shown above using `uncompressed` instead of `response`.

I hope I understood the question right, which is: how to download a file from a server when the URL is stored as a string type?

I download files and save them locally using the code below:

import requests

url = 'https://www.python.org/static/img/python-logo.png'
fileName = r'D:\Python\dwnldPythonLogo.png'  # raw string so the backslashes are not treated as escape sequences
req = requests.get(url)
with open(fileName, 'wb') as file:           # binary mode; closed automatically
    for chunk in req.iter_content(100000):
        file.write(chunk)
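
For larger files, a streamed variant avoids holding the whole body in memory at once; a sketch using the standard stream=True / iter_content combination (the URL and path are just examples):

import requests

url = 'https://www.python.org/static/img/python-logo.png'
fileName = r'D:\Python\dwnldPythonLogo.png'

with requests.get(url, stream=True) as req:
    req.raise_for_status()                      # fail early on HTTP errors
    with open(fileName, 'wb') as file:
        for chunk in req.iter_content(chunk_size=100000):
            file.write(chunk)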

Whenever I want something related to HTTP requests, I use the requests package, because its API is very easy to start with:

First, install requests:

$ pip install requests
Then the code:

from requests import get  # to make GET request


def download(url, file_name):
    # open in binary mode
    with open(file_name, "wb") as file:
        # get request
        response = get(url)
        # write to file
        file.write(response.content)
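
A quick usage sketch for the helper above (the URL and file name are just examples):

download('https://www.python.org/static/img/python-logo.png', 'python-logo.png')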

You can use wget, which is a popular downloading shell tool, through the wget Python package. This is the simplest way, since it doesn't even need to open the destination file. Here is an example:

import wget
url = 'https://i1.wp.com/python3.codes/wp-content/uploads/2015/06/Python3-powered.png?fit=650%2C350'  
wget.download(url, '/Users/scott/Downloads/cat4.jpg') 

Here we can use the legacy interface of urllib in Python 3:

The following functions and classes are ported from the Python 2 module urllib (as opposed to urllib2). They may become deprecated at some point in the future.

Example (2 lines of code):

import urllib.request

url = 'https://www.python.org/static/img/python-logo.png'
urllib.request.urlretrieve(url, "logo.png")
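
If you also want a progress read-out, urlretrieve accepts an optional reporthook callback; a rough sketch (the printed format is just an illustration):

import urllib.request

def report(block_num, block_size, total_size):
    # Called by urlretrieve after each block is transferred
    downloaded = block_num * block_size
    if total_size > 0:
        print(f'{min(downloaded, total_size)} / {total_size} bytes')

url = 'https://www.python.org/static/img/python-logo.png'
urllib.request.urlretrieve(url, "logo.png", reporthook=report)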

Yes, definitely requests is a good package to use for anything related to HTTP requests. But we need to be careful with the encoding type of the incoming data; below is an example that explains the difference.


from requests import get

# Case 1: the response content is a byte array (e.g. an image)
url = 'some_image_url'

response = get(url)
with open('output', 'wb') as file:
    file.write(response.content)


# Case 2: the response content is text
# If the response content is reported as, say, **iso-8859-1**, we may have to override the response encoding
url = 'some_page_url'

response = get(url)
# Override the encoding with the educated guess provided by chardet
response.encoding = response.apparent_encoding

with open('output', 'w', encoding='utf-8') as file:
    file.write(response.text)
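
A small sketch that folds the two cases into one decision by looking at the Content-Type header (the URL and 'output' file name are placeholders):

from requests import get

url = 'some_url'  # placeholder
response = get(url)
content_type = response.headers.get('Content-Type', '')

if content_type.startswith('text/'):
    # Text: trust chardet's guess for the encoding and write decoded text
    response.encoding = response.apparent_encoding
    with open('output', 'w', encoding='utf-8') as file:
        file.write(response.text)
else:
    # Anything else: write the raw bytes untouched
    with open('output', 'wb') as file:
        file.write(response.content)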

Motivation

Sometimes we want to get the picture, but don't need to download it to a real file,

i.e., download the data and keep it in memory.

For example, if I use a machine learning method and train a model that can recognize an image with a number (bar code),

then when I crawl some websites that have those images, I can use the model to recognize them,

and I don't want to save those pictures on my disk drive.

Then you can try the method below to help you keep the downloaded data in memory.

Point

Basically, it is like @Ranvijay Kumar's answer: stream the response chunks into an in-memory buffer (a short BytesIO snippet appears near the end of this page).

An example:

import requests
from typing import NewType, TypeVar
from io import StringIO, BytesIO
import matplotlib.pyplot as plt
import imageio

URL = NewType('URL', str)
T_IO = TypeVar('T_IO', StringIO, BytesIO)


def download_and_keep_on_memory(url: URL, headers=None, timeout=None, **option) -> T_IO:
    chunk_size = option.get('chunk_size', 4096)  # default 4KB
    max_size = 1024 ** 2 * option.get('max_size', -1)  # MB, default will ignore.
    response = requests.get(url, headers=headers, timeout=timeout)
    if response.status_code != 200:
        raise requests.ConnectionError(f'{response.status_code}')

    instance_io = StringIO if isinstance(next(response.iter_content(chunk_size=1)), str) else BytesIO
    io_obj = instance_io()
    cur_size = 0
    for chunk in response.iter_content(chunk_size=chunk_size):
        cur_size += chunk_size
        if 0 < max_size < cur_size:
            break
        io_obj.write(chunk)
    io_obj.seek(0)
    """ save it to a real file.
    with open('temp.png', mode='wb') as out_f:
        out_f.write(io_obj.read())
    """
    return io_obj


def main():
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-TW,zh;q=0.9,en-US;q=0.8,en;q=0.7',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Host': 'statics.591.com.tw',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36'
    }
    io_img = download_and_keep_on_memory(URL('http://statics.591.com.tw/tools/showPhone.php?info_data=rLsGZe4U%2FbphHOimi2PT%2FhxTPqI&type=rLEFMu4XrrpgEw'),
                                         headers,  # You may need this. Otherwise, some websites will send a 404 error to you.
                                         max_size=4)  # max loading < 4MB
    with io_img:
        plt.rc('axes.spines', top=False, bottom=False, left=False, right=False)
        plt.rc(('xtick', 'ytick'), color=(1, 1, 1, 0))  # same as plt.axis('off')
        plt.imshow(imageio.imread(io_img, as_gray=False, pilmode="RGB"))
        plt.show()


if __name__ == '__main__':
    main()

If you are using Linux, you can use Linux's wget through the Python shell. Here is a sample code snippet:

import os
url = 'http://www.example.com/foo.zip'
os.system('wget %s'%url)
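
A slightly safer variant (a sketch, assuming wget is installed and on PATH) passes the URL as a separate argument via subprocess, so the shell never interprets it:

import subprocess

url = 'http://www.example.com/foo.zip'
# The URL is passed as its own argument, so there are no shell quoting/injection issues
subprocess.run(['wget', url], check=True)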

# The short in-memory snippet referenced in the "Point" section above:
import requests
from io import BytesIO

response = requests.get(url)
with BytesIO() as io_obj:  # BytesIO must be instantiated before being used as a context manager
    for chunk in response.iter_content(chunk_size=4096):
        io_obj.write(chunk)
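
A minimal usage sketch for that snippet (the URL is a placeholder); note that the buffered bytes have to be copied out before the with block closes the buffer:

import requests
from io import BytesIO

response = requests.get('https://www.python.org/static/img/python-logo.png')  # placeholder URL
with BytesIO() as io_obj:
    for chunk in response.iter_content(chunk_size=4096):
        io_obj.write(chunk)
    data = io_obj.getvalue()  # copy the bytes out while the buffer is still open

print(len(data), 'bytes kept in memory')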