Python: how can I efficiently concatenate/merge/join large DataFrames in pandas?

The goal is to create one big DataFrame on which I can then perform operations, such as averaging each row across the columns, etc.
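For instance, with the key column as the index of the combined frame, that row-wise average would presumably be just:

row_means = df.mean(axis=1)  # mean of each row across the float columns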

The problem is that as the DataFrame grows, each iteration gets slower, so I can never finish the computation.

Note: my df has only two columns, of which col1 is not actually needed, hence why I join on it. col1 is a string and col2 is a float. The number of rows is 3k. Here is an example:

folder_paths    float
folder/Path     1.12630137
folder/Path2    1.067517426
folder/Path3    1.06443264
folder/Path4    1.049119625
folder/Path5    1.039635769
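Each such frame is built from a separate csv file by my generate_new_df function (not shown). A purely hypothetical, simplified stand-in, using the column names from the example above (the real function takes more arguments):

import pandas as pd

def generate_new_df(file_name):
    # hypothetical sketch: read one csv into a two-column frame
    # (string paths plus a float column), as in the example above
    return pd.read_csv(file_name, names=['folder_paths', 'float'])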
Question: any ideas on how to make this code more efficient, and where the bottleneck is? Also, I am not sure whether merge is the way to go.

Current idea: one solution I am considering is to preallocate the memory and to specify the column types: col1 is a string and col2 is a float. The merge loop I am currently timing looks like this:

df = pd.DataFrame() # create an empty data frame

for i in range(1000):
    if i == 0:  # first file: nothing to merge into yet
        df = generate_new_df(arg1, arg2)
    else:
        # outer-merge each new frame into the growing result on col1
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')
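For what it is worth, a minimal sketch of that preallocation idea, assuming all 1000 files share the same 3k keys (names and sizes here are illustrative, and the per-file fill step is omitted):

import numpy as np
import pandas as pd

# preallocate the full float block once, with explicit dtypes,
# instead of growing the frame merge by merge
n_rows, n_files = 3000, 1000
big = pd.DataFrame(
    np.zeros((n_rows, n_files), dtype='float64'),
    columns=['col2_%d' % i for i in range(n_files)],
)
big.insert(0, 'col1', pd.Series([''] * n_rows, dtype='object'))  # string key column

This only pays off if every file shares the same keys, so that each new column can be assigned in place; with differing keys an outer join of some form is still needed.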
I have also tried using pd.concat, but the results are very similar: the time increases after each iteration.

df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)
Results with pd.concat:

run 1    time 0.34s
run 2    time 0.34s
run 3    time 0.32s
run 4    time 0.33s
run 5    time 0.42s
run 6    time 0.41s
run 7    time 0.45s
run 8    time 0.46s
run 9    time 0.54s
run 10   time 0.58s
run 11   time 0.73s
run 12   time 0.72s
run 13   time 0.79s
run 14   time 0.87s
run 15   time 0.95s
run 16   time 1.06s
run 17   time 1.19s
run 18   time 1.24s
run 19   time 1.37s
run 20   time 1.57s
run 21   time 1.68s
run 22   time 1.93s
run 23   time 1.86s
run 24   time 1.96s
run 25   time 2.11s
run 26   time 2.32s
run 27   time 2.42s
run 28   time 2.57s
Using a list, dfList, and a single pd.concat call produced similar results. The code and the results are below:

dfList = []
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))  # collect every frame first

df = pd.concat(dfList, axis=1)  # then concatenate once at the end
Results:

run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.

It is still a little unclear what exactly your problem is, but I am going to assume that the main bottleneck is that you are trying to load a huge number of DataFrames into a list all at once and are running into memory/paging issues. With that in mind, here is an approach that might help, but you will have to test it yourself, since I have no access to your generate_new_df function or your data.

The approach is to use a variation of the merge_with_concat function below to merge smaller numbers of DataFrames together first, and then merge those intermediate results together all at once.

For example, if you have 1000 DataFrames, you can merge 100 of them together at a time to get 10 big DataFrames, and then merge those last 10 together as a final step. This should ensure that you never have too large a list of DataFrames at any one point.

You can use the two functions below (I am assuming your generate_new_df function takes a file name as one of its arguments) and do something like this:

def chunk_dfs(file_names, chunk_size):
    """Yield lists of dataframes, chunk_size at a time."""
    dfs = []
    for f in file_names:
        dfs.append(generate_new_df(f))
        if len(dfs) == chunk_size:
            yield dfs
            dfs = []
    if dfs:
        yield dfs  # any leftover frames form a final, smaller chunk


def merge_with_concat(dfs, col):
    # put the merge key in the index so concat can align on it
    dfs = (df.set_index(col, drop=True) for df in dfs)
    merged = pd.concat(dfs, axis=1, join='outer', copy=False)
    return merged.reset_index(drop=False)

col_name = "name_of_column_to_merge_on"
file_names = ['list/of', 'file/names', ...]
chunk_size = 100

merged = merge_with_concat((merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)), col_name)
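Two things to note here: pd.concat(axis=1, join='outer') on frames that share an index behaves like an outer merge on that key, without the repeated pairwise pd.merge calls; and because chunk_dfs is a generator, only about chunk_size raw DataFrames (plus the already-merged chunk results) should be in memory at any one time, instead of all 1000 at once.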

Why are you merging 1000 times?

Each new DataFrame comes from a separate csv file, so I generate a df with the information I need; there are about 1k or 10k such csv files.

What is the format of those files? Why do they need to be merged rather than concatenated?

As I wrote before, I am not sure merge is the best way to go about it, so whichever way is faster and gets the job done is fine. The format is: column 1 (string) and column 2 (float), each with about 3k rows.

Have a look at the answer to a similar question. It emulates a merge via concat applied in batches, which is more efficient. (Disclaimer: it is my answer.)