Memory usage of each function in Python

The program:

import time
import logging
from functools import reduce

logging.basicConfig(filename='debug.log', level=logging.DEBUG)



def read_large_file(file_object):
    """Uses a generator to read a large file lazily"""

    while True:
        data = file_object.readline()
        if not data:
            break
        yield data


def process_file_1(file_path):
    """Opens a large file and reads it in"""

    try:
        with open(file_path) as fp:
            for line in read_large_file(fp):
                logging.debug(line)

    except (IOError, OSError):
        print('Error Opening or Processing file')


def process_file_2(file_path):
    """Opens a large file and reads it in"""

    try:
        with open(file_path) as file_handler:
            while True:
                logging.debug(next(file_handler))
    except (IOError, OSError):
        print("Error opening / processing file")
    except StopIteration:
        pass


if __name__ == "__main__":
    path = "TB_data_dictionary_2016-04-15.csv"

    l1 = []
    for i in range(1, 10):
        # note: time.clock() was removed in Python 3.8;
        # use time.perf_counter() on modern Python
        start = time.clock()
        process_file_1(path)
        end = time.clock()
        diff = (end - start)
        l1.append(diff)

    avg = reduce(lambda x, y: x + y, l1) / len(l1)
    print('processing time (with generators) {}'.format(avg))

    l2 = []
    for i in range(1, 10):
        start = time.clock()
        process_file_2(path)
        end = time.clock()
        diff = (end - start)
        l2.append(diff)

    avg = reduce(lambda x, y: x + y, l2) / len(l2)
    print('processing time (with iterators) {}'.format(avg))
In the program above, I am trying to measure the time taken to open a large file using an iterator versus using a generator. The file is available. The time to read the file using the iterator is much lower than the time to read it using the generator.


My assumption was that if I measured the amount of memory used by the functions process_file_1 and process_file_2, the generator would outperform the iterator. Is there a way in Python to measure the memory usage of each function?

First of all, using a single iteration of the code to measure its performance is not a good idea. Your results can vary with any glitch in system performance (for example: a background process, the CPU performing garbage collection, and so on). You should check multiple iterations of the same code.

To measure the performance of your code, use the timeit module:

This module provides a simple way to time small bits of Python code. It has both a command-line interface as well as a callable one. It avoids a number of common traps for measuring execution times.
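
For example, a minimal sketch that times both functions with timeit.repeat, assuming the program above is saved as compare_readers.py (a hypothetical file name) alongside the CSV file:

import timeit

setup = ("from compare_readers import process_file_1, process_file_2; "
         "path = 'TB_data_dictionary_2016-04-15.csv'")

# repeat() returns one total time per round; reporting the minimum of
# several rounds filters out background-process noise
gen_times = timeit.repeat('process_file_1(path)', setup=setup, number=10, repeat=3)
it_times = timeit.repeat('process_file_2(path)', setup=setup, number=10, repeat=3)

print('generators, best of 3 rounds of 10 calls:', min(gen_times))
print('iterators,  best of 3 rounds of 10 calls:', min(it_times))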

To check how much memory your code consumes, use the memory_profiler module:

This is a Python module for monitoring the memory consumption of a process, as well as line-by-line analysis of memory consumption for Python programs.
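
A minimal sketch, assuming the package has been installed with pip install memory_profiler: decorate the function under test with @profile and run the script as usual (or run it with python -m memory_profiler script.py, in which case the import line can be omitted):

from memory_profiler import profile

@profile
def process_file_1(file_path):
    """Identical body to the version above; only the decorator is new."""
    try:
        with open(file_path) as fp:
            for line in read_large_file(fp):
                logging.debug(line)
    except (IOError, OSError):
        print('Error Opening or Processing file')

# When process_file_1 is called, memory_profiler prints a per-line
# report with columns Line #, Mem usage, Increment and Line Contents,
# which is exactly the per-function memory view the question asks for.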


Before both tests, perform one read whose result you simply discard, to make sure that any OS caching of the file applies to both runs (a sketch of such a warm-up read follows the output below).

@tdelaney - I have updated the program slightly:
C:\Python34\python.exe C:/pypen/data_structures/generators/generators1.py
processing time (with generators) 0.028033358176432314
processing time (with iterators) 0.02699498330810426
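
A minimal sketch of that throwaway read, using a hypothetical helper named warm_cache that is not part of the original program:

def warm_cache(file_path):
    """Read the file once and discard every line so the OS page cache
    is already warm before either timed run."""
    with open(file_path) as fp:
        for _ in fp:
            pass

if __name__ == "__main__":
    path = "TB_data_dictionary_2016-04-15.csv"
    warm_cache(path)  # one discarded full read before any timing
    # ... the two timing loops from the program above follow here ...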