
Sorted per-class averages from a file read line by line (Python 3)


I have an input file with rows of the form (last name, first name, class, score).

I need to group the values by class and compute the average of each group, for example:

49.0 91.0 64.5

The code has to read the file line by line. My code works, but it is too slow. How can I improve it?

from collections import defaultdict
from operator import itemgetter

import numpy

total = defaultdict(list)
with open('input', 'r', encoding='utf8') as f:
    for row in f:
        # "score" avoids shadowing the built-in range()
        _class, score = map(float, row.rsplit(None, 2)[-2:])
        total[_class].append(score)


print(*(numpy.mean(v) for k, v in sorted(total.items(), key=itemgetter(0))))

As I mentioned in the comments, there is not much that can be done to speed this up in pure Python, but I have a few small optimizations. The first (alt1) does not cast the group identifier string to float, which is an expensive operation. The second (alt2) uses a plain dictionary with predefined groups. The third (alt3) uses a list instead of a dictionary.

from collections import defaultdict
from operator import itemgetter
import random
from io import StringIO 
import numpy as np

# random data for benchmarks 
data = '\n'.join('first last {} {}'.format(random.randrange(1, 12), random.random()) for _ in range(1000))

def base(handle):
    # This is your implementation
    total = defaultdict(list)
    for row in handle:
        _class, score = map(float, row.rsplit(None, 2)[-2:])
        total[_class].append(score)
    return [np.mean(v) for k, v in sorted(total.items(), key=itemgetter(0))]

def alt1(handle):
    groups = defaultdict(list)
    for row in handle:
        group, value = row.rsplit(None, 2)[-2:]
        groups[group].append(float(value))
    # sort numerically: sorted() on the raw string keys would put '10' before '2'
    return [np.mean(v) for k, v in sorted(groups.items(), key=lambda item: int(item[0]))]

def alt2(handle):
    groups = {str(i): [] for i in range(1, 12)}
    for row in handle:
        key, val = row.rsplit(None, 2)[-2:]
        groups[key].append(float(val))
    return [np.mean(group) for _, group in sorted(groups.items(), key=lambda item: int(item[0]))]

def alt3(handle):
    groups = [[] for _ in range(11)]
    for row in handle:
        key, val = row.rsplit(None, 2)[-2:]
        groups[int(key)-1].append(float(val))
    return [np.mean(group) for group in groups if group]
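One thing to watch when the group keys stay strings: `sorted()` orders strings lexicographically, so `'10'` and `'11'` sort before `'2'`, and with two-digit class numbers the means can come out in a different order than with the float-keyed original. A small self-contained check (helper names here are illustrative, not from the thread) shows that sorting with a numeric key preserves the original ordering:

```python
from collections import defaultdict
from io import StringIO
import random

random.seed(1)
data = '\n'.join('first last {} {}'.format(random.randrange(1, 12), random.random())
                 for _ in range(1000))

def by_float_key(handle):
    # Original approach: cast the class to float and use it as the dict key.
    total = defaultdict(list)
    for row in handle:
        _class, score = map(float, row.rsplit(None, 2)[-2:])
        total[_class].append(score)
    return [sum(v) / len(v) for _, v in sorted(total.items())]

def by_str_key(handle):
    # String keys, sorted numerically at the end; a plain sorted() on the
    # string keys would order '10' and '11' before '2'.
    groups = defaultdict(list)
    for row in handle:
        group, value = row.rsplit(None, 2)[-2:]
        groups[group].append(float(value))
    return [sum(v) / len(v)
            for _, v in sorted(groups.items(), key=lambda kv: int(kv[0]))]

# Both variants agree once the string keys are compared numerically.
assert by_float_key(StringIO(data)) == by_str_key(StringIO(data))
```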
I cannot think of any other significant optimizations. Let's look at some benchmarks:

In [2]: %timeit base(StringIO(data))
1.18 ms ± 36.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [3]: %timeit alt1(StringIO(data))
937 µs ± 30.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [4]: %timeit alt2(StringIO(data))
941 µs ± 30.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [5]: %timeit alt3(StringIO(data))
1.08 ms ± 40.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

All three alternatives are faster than the original implementation. alt1 and alt2 have essentially the same performance and are noticeably faster; you may want to give them a try.
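One further micro-optimization that may be worth measuring (a hypothetical alt4, not part of the benchmarks above): for short per-group lists, `numpy.mean` spends much of its time converting the list to an ndarray, so a plain `sum()/len()` can be cheaper.

```python
from collections import defaultdict
from io import StringIO
import random

random.seed(0)
data = '\n'.join('first last {} {}'.format(random.randrange(1, 12), random.random())
                 for _ in range(1000))

def alt4(handle):
    # Same grouping as alt1, but the mean is computed with plain
    # sum()/len() instead of numpy.mean(), avoiding the list-to-ndarray
    # conversion on every group.
    groups = defaultdict(list)
    for row in handle:
        group, value = row.rsplit(None, 2)[-2:]
        groups[group].append(float(value))
    return [sum(v) / len(v)
            for _, v in sorted(groups.items(), key=lambda kv: int(kv[0]))]

means = alt4(StringIO(data))
print(len(means))  # one mean per class present in the data
```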

You read that right, I can't make the code faster. @L.Bond How big is your file? From 3 to 200 lines. What are your definitions of "slow" and "fast"? There is not much you can do to make this significantly faster in pure Python. You could use pandas, but it reads the whole file. I don't see anything obvious, although a number of things might make a difference: you are float-ing both the class number and the score, and you are appending to a dict of lists and then iterating over it; perhaps a list of tuples would be faster (with different logic?).