What is the optimal chunk size for reads with Python's urllib2.urlopen?

I'm using this code to download mp3 podcasts:

import urllib2

req = urllib2.urlopen(item)
CHUNK = 16 * 1024
with open(local_file, 'wb') as fp:
    while True:
        chunk = req.read(CHUNK)
        if not chunk:
            break
        fp.write(chunk)
This works fine - but I'm wondering what chunk size gives the best download performance.


In case it makes a difference, I'm on a 6 Mbit ADSL connection.

A good buffer size is the same size the OS kernel uses for its socket buffers. That way, you don't perform more reads than you have to.


On GNU/Linux, the socket buffer size can be seen in the /proc/sys/net/core/rmem_default file (the size is in bytes). You can increase a socket's buffer size with setsockopt, using the SO_RCVBUF option. However, this size is capped by your system (/proc/sys/net/core/rmem_max), and you need administrator privileges (CAP_NET_ADMIN) to go beyond that limit.
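As a minimal sketch of the above, you can inspect and raise a socket's receive buffer from Python (the 256 kB request is an arbitrary example; the kernel will silently cap it at rmem_max):

```python
import socket

# Create a TCP socket and inspect its receive buffer.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# The current receive buffer size; on Linux this starts out at
# the value in /proc/sys/net/core/rmem_default.
default_size = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Request a larger buffer. The kernel caps the request at
# /proc/sys/net/core/rmem_max (and on Linux doubles the value
# internally to account for bookkeeping overhead).
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
new_size = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print("default SO_RCVBUF:", default_size)
print("after setsockopt:", new_size)
s.close()
```

Since urllib2 doesn't expose the underlying socket before connecting, this is mainly useful for understanding what the kernel is doing, not for tuning urlopen directly.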

At that point, though, you would be doing platform-specific things for a fairly small gain.

Still, it's worth looking at the socket options (see man 7 socket) to perform micro-optimizations and learn a thing or two.


Since there is no single true sweet spot, you should always benchmark any tuning to check that your changes are actually beneficial. Have fun!

To expand further on my comment to @giant_teapot:

The code I used for benchmarking was:

#!/usr/bin/env python

import time
import os
import urllib2

# 5 MB mp3 file used as the test download
testdl = "http://traffic.libsyn.com/timferriss/Arnold_5_min_-_final.mp3"

chunkmulti = 1
numpass = 5

while chunkmulti < 207:
    passtime = 0
    passattempt = 1
    while passattempt <= numpass:
        start = time.time()
        req = urllib2.urlopen(testdl)
        CHUNK = chunkmulti * 1024
        with open("test.mp3", 'wb') as fp:
            while True:
                chunk = req.read(CHUNK)
                if not chunk:
                    break
                fp.write(chunk)
        end = time.time()
        passtime += end - start
        os.remove("test.mp3")
        passattempt += 1
    # average time over the numpass download runs
    print "Chunk size multiplier", chunkmulti, "took", passtime / numpass, "seconds"
    chunkmulti += 1
The results carried on like this all the way up to chunk sizes of 207 kB:

Chunk size multiplier  1  took  13.9629709721  seconds
Chunk size multiplier  2  took  8.01173728704  seconds
Chunk size multiplier  3  took  10.3750542402  seconds
Chunk size multiplier  4  took  7.11076325178  seconds
Chunk size multiplier  5  took  11.3685477376  seconds
Chunk size multiplier  6  took  6.86864703894  seconds
Chunk size multiplier  7  took  14.2680369616  seconds
Chunk size multiplier  8  took  7.93746650219  seconds
Chunk size multiplier  9  took  6.81188523769  seconds
Chunk size multiplier  10  took  7.54047352076  seconds
Chunk size multiplier  11  took  6.84347498417  seconds
Chunk size multiplier  12  took  7.88792568445  seconds
Chunk size multiplier  13  took  7.37244099379  seconds
Chunk size multiplier  14  took  8.15134423971  seconds
Chunk size multiplier  15  took  7.1664044857  seconds
Chunk size multiplier  16  took  10.9474172592  seconds
Chunk size multiplier  17  took  7.23868894577  seconds
Chunk size multiplier  18  took  7.66610199213  seconds

So I set the chunk size to 6 kB. Next I might try benchmarking it against wget...

Comments:

Good question, but it isn't really specific to urllib2/Python. Have a look around for an existing answer. Are you sure this needs optimizing? Try benchmarking it, and compare against wget'ing the file.

The default in /proc/sys/net/core/rmem_default shows up as 212992 - far beyond the buffer sizes I had in mind. In the end, I benchmarked across the whole range (from 1 kB to 206 kB). The outcome: nothing conclusive - it hardly matters what you set - the differences are negligible, with no particular pattern. Heh. Worth a try anyway.

An interesting experiment. :) Of course, the gain will always be minimal (if any); I think you should run several downloads for each buffer size to collect more samples and get more reliable results. Computing the mean and the standard deviation would be the most relevant here. Have fun o/
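As suggested in the comments, averaging several passes per chunk size and looking at the spread makes the comparison more trustworthy. A minimal sketch of computing the mean and sample standard deviation (the timing values below are made up for illustration, not real measurements):

```python
import math

def mean_stdev(samples):
    # Arithmetic mean and sample standard deviation (n - 1 divisor).
    m = sum(samples) / float(len(samples))
    var = sum((x - m) ** 2 for x in samples) / (len(samples) - 1)
    return m, math.sqrt(var)

# Hypothetical per-pass download times (seconds) for one chunk size.
times = [13.96, 8.01, 10.38, 7.11, 11.37]
m, sd = mean_stdev(times)
print("mean: %.2f s, stdev: %.2f s" % (m, sd))
```

If two chunk sizes' means differ by less than roughly their standard deviations, the benchmark hasn't shown a real difference between them.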