Python: long text file (186 million lines) takes up too much space when parsed into a tabular format


I am running a simulation in Python 3.7 that outputs a log file. This log file contains four columns of information I want to extract ("Rank", "Particle", "Distance", "Time"); however, the file is so long (~186 million lines) that it cannot be converted to a table without blowing up memory.

Much of the information in the log file is redundant (i.e. there are many rows I don't want). The data represent test particles that have close encounters with Jupiter, and I only want the closest point of each particle's encounter path (i.e. the row where the distance is minimized).

I would like to know how to parse the whole thing sequentially, loading and then releasing a subset of rows at a time and deciding which rows to discard, so that I can avoid a memory error.

Here is a sample of the log file:

> INFO:root:Rank: 9; Particle: 11; Distance: 0.9091072240849053; Time: -16.313304965974524
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0044817868831895; Time: -16.313304965974524
> INFO:root:Rank: 9; Particle: 11; Distance: 0.908626047054527; Time: -16.313713653638327
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0039465102430458; Time: -16.313713653638327
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9080831675466843; Time: -16.31417484234347
> INFO:root:Rank: 9; Particle: 12; Distance: 1.003342787368617; Time: -16.31417484234347
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9075612522257289; Time: -16.314618315598103
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0027625719975715; Time: -16.314618315598103
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9071397102705921; Time: -16.3149765686745
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0022940809354668; Time: -16.3149765686745
> INFO:root:Rank: 9; Particle: 17; Distance: 1.0138064947281393; Time: -16.3149765686745
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9068825428781885; Time: -16.31519515543922
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0020083325953948; Time: -16.31519515543922
> INFO:root:Rank: 9; Particle: 17; Distance: 1.013519683237125; Time: -16.31519515543922
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9094533423012789; Time: -16.31301103889693
> INFO:root:Rank: 9; Particle: 12; Distance: 1.004866919381637; Time: -16.31301103889693
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9091072240849053; Time: -16.313304965974524
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0044817868831895; Time: -16.313304965974524
> INFO:root:Rank: 9; Particle: 11; Distance: 0.908626047054527; Time: -16.313713653638327
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0039465102430458; Time: -16.313713653638327
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9080831675466843; Time: -16.31417484234347
> INFO:root:Rank: 9; Particle: 12; Distance: 1.003342787368617; Time: -16.31417484234347
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9075612522257289; Time: -16.314618315598103
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0027625719975715; Time: -16.314618315598103
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9071397102705921; Time: -16.3149765686745
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0022940809354668; Time: -16.3149765686745
> INFO:root:Rank: 9; Particle: 17; Distance: 1.0138064947281393; Time: -16.3149765686745
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9068825428781885; Time: -16.31519515543922
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0020083325953948; Time: -16.31519515543922
> INFO:root:Rank: 9; Particle: 17; Distance: 1.013519683237125; Time: -16.31519515543922
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9068198463831555; Time: -16.31524844951857
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0019386751793453; Time: -16.31524844951857
> INFO:root:Rank: 9; Particle: 17; Distance: 1.0134497671630922; Time: -16.31524844951857
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9066701792148222; Time: -16.315375676922567
> INFO:root:Rank: 9; Particle: 12; Distance: 1.00177240223002; Time: -16.315375676922567
> INFO:root:Rank: 9; Particle: 17; Distance: 1.013282877600642; Time: -16.315375676922567
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9063404096803097; Time: -16.315656030600657
> INFO:root:Rank: 9; Particle: 12; Distance: 1.0014060996373213; Time: -16.315656030600657
> INFO:root:Rank: 9; Particle: 15; Distance: 1.0137165581155958; Time: -16.315656030600657
> INFO:root:Rank: 9; Particle: 17; Distance: 1.012915220608835; Time: -16.315656030600657
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9058819575130683; Time: -16.316045845280794
> INFO:root:Rank: 9; Particle: 12; Distance: 1.000896985053485; Time: -16.316045845280794
> INFO:root:Rank: 9; Particle: 15; Distance: 1.0132054747127601; Time: -16.316045845280794
> INFO:root:Rank: 9; Particle: 17; Distance: 1.0124042327584963; Time: -16.316045845280794
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9053647124033892; Time: -16.316485736531497
> INFO:root:Rank: 9; Particle: 12; Distance: 1.000322757426278; Time: -16.316485736531497
> INFO:root:Rank: 9; Particle: 15; Distance: 1.0126290399058455; Time: -16.316485736531497
> INFO:root:Rank: 9; Particle: 17; Distance: 1.0118279051195338; Time: -16.316485736531497
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9048674370339668; Time: -16.31690873042198
> INFO:root:Rank: 9; Particle: 12; Distance: 0.9997708766377388; Time: -16.31690873042198
> INFO:root:Rank: 9; Particle: 15; Distance: 1.012075051289847; Time: -16.31690873042198
> INFO:root:Rank: 9; Particle: 17; Distance: 1.011274018895163; Time: -16.31690873042198
> INFO:root:Rank: 9; Particle: 11; Distance: 0.9044657930933018; Time: -16.317250439557714
> INFO:root:Rank: 9; Particle: 12; Distance: 0.9993252554048654; Time: -16.317250439557714
Here is what I originally wrote to convert it to a table (before I realized how long the file was):


Side note: the file-viewer GUI says the file is 26 MB, but running du in the terminal shows the file is actually 16 GB! I'm not sure why the GUI gets it wrong.

You can wrap your for loop (for line in lines) inside another for loop (for i in range(x), where x is the number of chunks you want to split the lines into) and then operate on the slice lines[i::x].

For example:

for i in range(1000): # split the lines into 1000 chunks
    for line in lines[i::1000]: # chunk i: every 1000th line, starting at offset i
        # do stuff here
        yield df # if this is what you want to do (see below)

Then, if you want to get DataFrames back, you would yield the DataFrame built for each chunk and process the chunks' DataFrames one at a time outside the function.
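To make that concrete, here is a minimal sketch of the chunk-and-yield pattern. It assumes lines is a list of log lines already read in; the parse_chunk helper and the per-chunk reduction to each particle's minimum distance are illustrative assumptions, not part of the original answer:

import pandas as pd

def parse_chunk(chunk_lines):
    # hypothetical helper: turn raw log lines into a DataFrame with typed columns
    records = []
    for line in chunk_lines:
        fields = dict(part.split(': ') for part in line.replace('INFO:root:', '').split('; '))
        records.append({'rank': int(fields['Rank']),
                        'particle': int(fields['Particle']),
                        'distance': float(fields['Distance']),
                        'time': float(fields['Time'])})
    return pd.DataFrame(records)

def chunked_frames(lines, n_chunks=1000):
    # yield one reduced DataFrame per chunk: chunk i holds every n_chunks-th line,
    # starting at offset i, collapsed to each particle's minimum distance
    for i in range(n_chunks):
        df = parse_chunk(lines[i::n_chunks])
        yield df.loc[df.groupby('particle')['distance'].idxmin()]

# Outside the generator, combine the per-chunk results and reduce once more,
# because a particle's overall minimum can fall in any chunk:
# closest = pd.concat(chunked_frames(lines), ignore_index=True)
# closest = closest.loc[closest.groupby('particle')['distance'].idxmin()]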

I would use dask, the big-data counterpart of pandas, for this (note: I renamed a few objects, because you shouldn't use names like index or time, which can be confused with built-in objects). The snippet appears further down, below the comments:


From the comments on the question:

Once the data has been read in, what exactly do you want to do with it? You could loop through the file line by line, processing each line to get the result you want. Is there a reason the whole file has to be read into memory at once?

Well, I definitely don't need to read the whole file at once. I know I need to iterate through the rows, comparing them against each other and keeping only the row that corresponds to the closest approach (minimum distance) for each particle. I just don't know how to do that, and save the result to a new table, without using too much memory.

How can a 186M-line file be only 30 MB? It would need at least 186M newline characters.

Why not loop over each line, store a distance for each particle, and every time a new line comes in for that particle, check whether its distance is closer; if it is, replace the stored entry, if not, keep looping? ... At the end you keep only one row per particle, the one with the closest distance.

You should run a filesystem check.
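A minimal sketch of that last suggestion: a single streaming pass that keeps only the closest row seen so far for each particle. The log.txt path and the CSV output are assumptions for illustration:

import re

logfile = 'log.txt'   # hypothetical path to the simulation log
pattern = re.compile(
    r'Rank: (\d+); Particle: (\d+); Distance: ([0-9.]+); Time: (-?[0-9.]+)')

closest = {}  # particle id -> (rank, particle, distance, time) at its minimum distance

with open(logfile) as fh:
    for line in fh:                       # streams the file; never holds it all in memory
        m = pattern.search(line)
        if m is None:
            continue                      # skip lines that are not close-encounter records
        rank, particle = int(m.group(1)), int(m.group(2))
        distance, time_ = float(m.group(3)), float(m.group(4))
        if particle not in closest or distance < closest[particle][2]:
            closest[particle] = (rank, particle, distance, time_)

with open('closest_approaches.csv', 'w') as out:   # one row per particle
    out.write('rank,particle,distance,time\n')
    for rank, particle, distance, time_ in closest.values():
        out.write(f'{rank},{particle},{distance},{time_}\n')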
import re
import pandas as pd
import dask.dataframe as dd

logfile = r'Desktop\dd.txt'
df = dd.read_csv(logfile, header=None)   # lazy, out-of-core read; the raw lines end up in column 0
df

def ce_log_to_table(df):
    ranks = []
    indices = []
    distances = []
    times = []

    for line in df[0]:
        # pull the four fields out of a line such as
        # 'INFO:root:Rank: 9; Particle: 11; Distance: 0.909...; Time: -16.313...'
        rnk = re.search('(?<=Rank: )[0-9]*(?=; P)', line)
        idx = re.search('(?<=Particle: )[0-9]*(?=; D)', line)
        dstnc = re.search('(?<=Distance: )[0-9.]*(?=; T)', line)
        t = re.search('(?<=Time: )-?[0-9.]*', line)

        ranks.append(rnk[0])
        indices.append(idx[0])
        distances.append(dstnc[0])
        times.append(t[0])

    ce_dict = {'rank': ranks, 'index': indices, 'distance': distances, 'time': times}
    return pd.DataFrame(ce_dict)


ce_log_to_table(df).to_csv('dask_test.txt')
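As a design note: looping over df[0] in Python still visits every row one at a time. If the dask route is taken, a vectorized variant along these lines (a sketch using dask's str.extract, not part of the original answer) would let dask parse the columns partition by partition:

# assumes df is the dask dataframe read above, with the raw lines in column 0
pattern = (r'Rank: (?P<rank>\d+); Particle: (?P<particle>\d+); '
           r'Distance: (?P<distance>[0-9.]+); Time: (?P<time>-?[0-9.]+)')
parsed = df[0].str.extract(pattern)
parsed.to_csv('dask_test-*.csv')   # dask writes one CSV per partition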