
Python: reducing the memory used when loading a huge pandas DataFrame from MongoDB


I have a large dataset of 40 million records, about 21.0 GB in total, stored in MongoDB. It took me a couple of hours to load it into a pandas DataFrame, and total memory use grew to about 28.7 GB (from roughly 600 MB before loading).

Given the time cost of loading such a dataset, I saved it to local disk with pd.to_csv('localdisk.csv'). The CSV file is 7.1 GB.

So the question is: why is the CSV file so small while the DataFrame (or whatever else is holding the data) uses roughly four times as much memory, and is there a better way to reduce the memory used by the DataFrame? I have another dataset with more than 100 million similar items, and I am not sure whether I could load it into memory at all this way.

PS: I think the reason it takes so much time to load the data into memory is these three commands:

temp = pd.DataFrame(dataset, columns=dataset[0].keys())
dataset = []
data = data.append(temp)
There are 60,000 items in the dataset list, and loading them into data (a pandas DataFrame) takes about 5-10 minutes.
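For what it's worth, a common way to avoid this repeated copying is to collect each chunk as its own DataFrame and concatenate once at the end; a minimal sketch, assuming cursor is the pymongo cursor used in the snippets below:

import pandas as pd

chunks = []
buffer = []
for doc in cursor:                    # cursor: the pymongo cursor over the collection
    buffer.append(doc)
    if len(buffer) == 100000:
        chunks.append(pd.DataFrame(buffer))
        buffer = []
if buffer:
    chunks.append(pd.DataFrame(buffer))
# a single concat at the end instead of growing `data` inside the loop
data = pd.concat(chunks, ignore_index=True)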

UPDATE:

The code used to generate the metrics is below. It shows that concat is more efficient than append; I haven't tested it further.

import time
import psutil
import pandas as pd

# cursor is the pymongo cursor over the collection;
# data starts as an empty DataFrame, dataset as an empty list, count at 0
last_time = time.time()
for i in cursor:
    dataset.append(i)
    del i
    count += 1
    if count % 100000 == 0:
        # turn the buffered documents into a DataFrame and concat it onto data
        temp = pd.DataFrame(dataset, columns=dataset[0].keys())
        dataset = []
        data = pd.concat([data, temp])
        current_time = time.time()
        cost_time = current_time - last_time
        last_time = current_time
        memory_usage = psutil.virtual_memory().used / (1024**3)
        print("count is {}, cost time is {}, memory usage is {}".format(count, cost_time, memory_usage))
Metrics of loading the data into the DataFrame:

UPDATE 2:

Code used to normalize the data (downcast to smaller integers and categoricals):
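import time
import pandas as pd

# As in the earlier snippet, cursor is the pymongo cursor and dataset/data/count
# are the buffer list, the DataFrame and the counter. Column and field names
# ("somecount", "somecate", "something", ...) are anonymized placeholders;
# each one is a distinct column in the real code.
last_time = time.time()
dtypes = {"somecount":'int32',"somecount":"int32","somecate":"category","somecount":"int32","somecate":"category","somecount":"int32","somecount":"int32","somecate":"category"}
for i in cursor:
    # drop fields that are not needed in the DataFrame
    del i['something']
    del i['sometime']
    del i['something']
    del i['something']
    del i['someint']
    dataset.append(i)
    del i
    count += 1
    if count % 100000 == 0:
        temp = pd.DataFrame(dataset, columns=dataset[0].keys())
        temp.fillna(0, inplace=True)
        temp = temp.astype(dtypes, errors="ignore")
        dataset = []
        data = pd.concat([data, temp])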

Metrics after optimization:

Total memory usage dropped by more than half, but the concat/append time did not change much.

data length is 37800000,count is 37800000, cost time is 132.23220038414001, memory usage is 11.789329528808594
data length is 37900000,count is 37900000, cost time is 65.34806060791016, memory usage is 11.7882080078125
data length is 38000000,count is 38000000, cost time is 122.15527963638306, memory usage is 11.804153442382812
data length is 38100000,count is 38100000, cost time is 47.79928374290466, memory usage is 11.828723907470703
data length is 38200000,count is 38200000, cost time is 49.70282459259033, memory usage is 11.837543487548828
data length is 38300000,count is 38300000, cost time is 155.42868423461914, memory usage is 11.895767211914062
data length is 38400000,count is 38400000, cost time is 105.94551157951355, memory usage is 11.947330474853516
data length is 38500000,count is 38500000, cost time is 136.1993544101715, memory usage is 12.013351440429688
data length is 38600000,count is 38600000, cost time is 114.5268976688385, memory usage is 12.013912200927734
data length is 38700000,count is 38700000, cost time is 53.31018781661987, memory usage is 12.017452239990234
data length is 38800000,count is 38800000, cost time is 65.94741868972778, memory usage is 12.058589935302734
data length is 38900000,count is 38900000, cost time is 42.62899565696716, memory usage is 12.067787170410156
data length is 39000000,count is 39000000, cost time is 57.95372486114502, memory usage is 11.979434967041016
data length is 39100000,count is 39100000, cost time is 62.12286162376404, memory usage is 12.026973724365234
data length is 39200000,count is 39200000, cost time is 80.76535606384277, memory usage is 12.111717224121094

What is in a CSV and what is in a DataFrame are two very different things. For example, 9.9 and 9.9999999999999 in a CSV will take up the same amount of space in a DataFrame.
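A quick way to see this, and to see where a DataFrame's memory actually goes, is memory_usage(deep=True); a minimal sketch with made-up values:

import pandas as pd

df = pd.DataFrame({"value": [9.9, 9.9999999999999]})
print(df.dtypes)                    # value    float64 -> every entry costs 8 bytes
print(df.memory_usage(deep=True))   # per-column memory, independent of how many
                                    # digits the CSV representation needs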

That said, data in a DataFrame takes up much less space than data in a list. Building a list is expensive in memory, and appending to a DataFrame requires pandas to create a new (bigger) DataFrame, copy everything across, and leave the original for garbage collection.

You would probably do better preallocating a DataFrame of 60,000 rows (or however many rows you have in total); e.g.:

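import numpy as np
import pandas as pd

# preallocate all 60,000 rows up front; 'x' and 'y' are placeholder column
# names and dtypes standing in for the real columns
data = pd.DataFrame(np.empty((60000,), dtype=[
    ('x', np.uint8),
    ('y', np.float64)
]))
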
Then, without relying on the dataset list, insert each row's data as you go:

data.values[count,:] = rowdata_at_count
This is not type-safe, but it is very fast (since nothing is being allocated), so make sure rowdata_at_count is a list whose elements correspond to the column types.

EDIT

"concat is [more] efficient than append"


Yes, I believe appending 100 rows is like 100 concats of one row (since each append must reallocate and copy the table, just as concat does). Preallocating avoids both append and concat: the table size never changes, so no reallocation and copying is needed.

If you save the file to CSV, you can use read_csv with the parameter memory_map=True (see the sketch after this list).
You can also select a subset of the columns from the start, instead of dropping them later, and read just those columns if you only need a few of them.
You can convert repeated text / categorical data to dummies or integers.
If you can get a table with a single dtype throughout, you may also want to use NumPy instead.
Combined with sparse matrices, that can help reduce the memory size significantly and speed up loading and processing.
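A minimal sketch of the first two suggestions, with hypothetical column names (the file name is the one from the question):

import pandas as pd

df = pd.read_csv(
    "localdisk.csv",
    memory_map=True,                                   # map the file instead of reading it through a buffer
    usecols=["some_id", "some_count", "some_cate"],    # read only the columns you need
    dtype={"some_id": "int32", "some_count": "int32", "some_cate": "category"},
)
print(df.memory_usage(deep=True))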

For combining operations, the pandas docs describe merge as "high performance" compared with concat and the like (and with append before those operations).

You may want to use the inplace=True parameter to avoid the cost of copying.

This problem was solved very efficiently with hdf5 and pytables.

1. Define the description.
2. Generate the hdf5 file with pytables.
3. Loop over the cursor and insert the data into the table.
4. Now all the data from MongoDB is stored in an hdf5 file on local disk; the final .h5 file is 4.6 GB.
5. Metrics of loading the data into the .h5 file.
The code for each step follows below. THE END

import time
import psutil
from tables import *

# 1. define the description of the table
class Description(IsDescription):
    something1 = StringCol(30)
    somecount1 = Int32Col()
    somecount2 = Int32Col()
    something2 = StringCol(10)
    somecount3 = Int32Col()
    something3 = StringCol(20)
    somecount4 = Int32Col()
    somecount5 = Int32Col()
    something4 = StringCol(29)
    sometime = Time64Col()

# 2. generate the hdf5 file with pytables
h5file = open_file("filename.h5", mode='w', title="title_of_filename")
group = h5file.create_group("/", 'groupname', 'somethingelse')
table = h5file.create_table(group, 'readout', Description, "Readout example")
particle = table.row

# 3. loop over the cursor and insert the data into the table
count = 0
last_time = time.time()
for i in cursor:
    try:
        particle['something1'] = i['something1']
        ...                      # the remaining columns are assigned the same way
        particle['sometime'] = i['sometime']
        particle.append()
        count += 1
        if count % 100000 == 0:
            current_time = time.time()
            cost_time = current_time - last_time
            last_time = current_time
            memory_usage = psutil.virtual_memory().used / (1024**3)
            print("count is {}, cost time is {}, memory usage is {}".format(count, cost_time, memory_usage))
    except Exception as e:
        print(e)
        print(i)
        break

table.flush()
h5file.close()
count is 100000, cost time is 61.384639501571655, memory usage is 0.6333351135253906
count is 200000, cost time is 1.8020610809326172, memory usage is 0.6135673522949219
count is 300000, cost time is 2.348151206970215, memory usage is 0.6422805786132812
count is 400000, cost time is 1.768083095550537, memory usage is 0.6340789794921875
count is 500000, cost time is 1.7722208499908447, memory usage is 0.6187820434570312
count is 600000, cost time is 2.418192148208618, memory usage is 0.6522865295410156
count is 700000, cost time is 1.8863332271575928, memory usage is 0.6428298950195312
count is 800000, cost time is 1.8162147998809814, memory usage is 0.6209907531738281
count is 900000, cost time is 2.2260451316833496, memory usage is 0.6533966064453125
count is 1000000, cost time is 1.644845962524414, memory usage is 0.6412544250488281
count is 1100000, cost time is 1.7015583515167236, memory usage is 0.6193504333496094
count is 1200000, cost time is 2.2118935585021973, memory usage is 0.6539993286132812
count is 1300000, cost time is 1.732633352279663, memory usage is 0.6396903991699219
count is 1400000, cost time is 1.7652947902679443, memory usage is 0.6167755126953125
count is 1500000, cost time is 2.49992299079895, memory usage is 0.6546707153320312
count is 1600000, cost time is 1.9869158267974854, memory usage is 0.6390419006347656
count is 1700000, cost time is 1.8238599300384521, memory usage is 0.6159439086914062
count is 1800000, cost time is 2.2168307304382324, memory usage is 0.6554222106933594
count is 1900000, cost time is 1.7153246402740479, memory usage is 0.6401138305664062
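
Once the h5 file is written, it can be loaded back into pandas without going through MongoDB again; a minimal sketch, assuming the group and table names used above:

import pandas as pd
import tables

with tables.open_file("filename.h5", mode="r") as h5file:
    table = h5file.root.groupname.readout            # the group/table created above
    df = pd.DataFrame.from_records(table.read())     # structured array -> DataFrame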