Dask - MemoryError when dropping duplicate index


I get a MemoryError when I try to drop duplicate timestamps on a large dataframe with the following code:

import dask.dataframe as dd

path = f's3://{container_name}/*'
ddf = dd.read_parquet(path, storage_options=opts, engine='fastparquet')
ddf = ddf.reset_index().drop_duplicates(subset='timestamp_utc').set_index('timestamp_utc')
...
Profiling shows that it uses about 14 GB of RAM on a 265 MB dataset of gzip-compressed Parquet files containing roughly 40 million rows of data.

Is there an alternative way to drop the duplicated index on the data without using Dask?

Traceback below:

Traceback (most recent call last):
  File "/anaconda/envs/surb/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/anaconda/envs/surb/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/chengkai/surbana_lift/src/consolidate_data.py", line 62, in <module>
    consolidate_data()
  File "/home/chengkai/surbana_lift/src/consolidate_data.py", line 37, in consolidate_data
    ddf = ddf.reset_index().drop_duplicates(subset='timestamp_utc').set_index('timestamp_utc')
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/core.py", line 2524, in set_index
    divisions=divisions, **kwargs)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/shuffle.py", line 64, in set_index
    divisions, sizes, mins, maxes = base.compute(divisions, sizes, mins, maxes)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/base.py", line 407, in compute
    results = get(dsk, keys, **kwargs)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/threaded.py", line 75, in get
    pack_exception=pack_exception, **kwargs)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 521, in get_async
    raise_exception(exc, tb)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/compatibility.py", line 67, in reraise
    raise exc
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 290, in execute_task
    result = _execute_task(task, data)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 270, in _execute_task
    args2 = [_execute_task(a, cache) for a in args]
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 270, in <listcomp>
    args2 = [_execute_task(a, cache) for a in args]
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 267, in _execute_task
    return [_execute_task(a, cache) for a in arg]
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 267, in <listcomp>
    return [_execute_task(a, cache) for a in arg]
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/local.py", line 271, in _execute_task
    return func(*args2)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/core.py", line 69, in _concat
    return args[0] if not args2 else methods.concat(args2, uniform=True)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/dask/dataframe/methods.py", line 329, in concat
    out = pd.concat(dfs3, join=join)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 226, in concat
    return op.get_result()
  File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/reshape/concat.py", line 423, in get_result
    copy=self.copy)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/internals.py", line 5418, in concatenate_block_manage
rs
    [ju.block for ju in join_units], placement=placement)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/internals.py", line 2984, in concat_same_type
    axis=self.ndim - 1)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/dtypes/concat.py", line 461, in _concat_datetime
    return _concat_datetimetz(to_concat)
  File "/anaconda/envs/surb/lib/python3.6/site-packages/pandas/core/dtypes/concat.py", line 506, in _concat_datetimetz
    new_values = np.concatenate([x.asi8 for x in to_concat])
MemoryError

It is not surprising that the data becomes very large in memory. Parquet is a very space-efficient format, especially with gzip compression, and on load any strings all become Python objects (which are expensive in memory).

In addition, you have many worker threads operating on portions of the overall dataframe. That involves data copying, intermediates, and concatenation of results; the last is quite inefficient in pandas.

One suggestion: instead of calling reset_index, you can remove one step by specifying index=False in read_parquet.
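A minimal sketch of that change, reusing the path and storage options from the question:

import dask.dataframe as dd

path = f's3://{container_name}/*'
# index=False asks read_parquet not to build an index at load time,
# so the separate reset_index() step disappears
ddf = dd.read_parquet(path, index=False, storage_options=opts,
                      engine='fastparquet')
ddf = ddf.drop_duplicates(subset='timestamp_utc').set_index('timestamp_utc')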

Next suggestion: limit the number of threads you use to something smaller than the default (which is probably your number of CPU cores). The simplest way to do this is to use the distributed client in-process:

from dask.distributed import Client
c = Client(processes=False, threads_per_worker=4)  # in-process, 4 threads per worker
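With more recent Dask versions the same kind of limit can also be applied per call, without a client, by passing scheduler options to compute; a sketch, with the thread count purely illustrative:

# run just this computation on the threaded scheduler with at most 4 threads
result = ddf.compute(scheduler='threads', num_workers=4)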
It may be better to set the index first, and then perform the drop_duplicates with map_partitions, to minimize cross-partition communication:

df.map_partitions(lambda d: d.drop_duplicates(subset='timestamp_utc'))
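Putting the pieces together, the whole pipeline might look like the sketch below. One adaptation: once timestamp_utc has been made the index it is no longer a column, so the per-partition step deduplicates on the index instead of drop_duplicates(subset=...):

import dask.dataframe as dd
from dask.distributed import Client

c = Client(processes=False, threads_per_worker=4)  # in-process, limited threads

path = f's3://{container_name}/*'
ddf = dd.read_parquet(path, index=False, storage_options=opts,
                      engine='fastparquet')

# set_index shuffles the data so that all rows with a given timestamp end up
# in the same partition; deduplication then needs no further communication
ddf = ddf.set_index('timestamp_utc')
ddf = ddf.map_partitions(lambda d: d[~d.index.duplicated(keep='first')])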

After repartitioning the dataframe with repartition(freq='W'), reducing the number of workers, and dropping duplicates with map_partitions, I can now run the computation. Thank you.

@leeschengkai If this solved your problem, please upvote and eventually accept this answer. Could you update the question or the answer with example code showing how you used map_partitions?

@mdurant I am doing drop_duplicates(split_out=n) on an 80 GB dataframe, but I keep running into memory errors. I noticed that a plain drop_duplicates always produces a single-partition result, which I don't want; all of my partitions are already deduplicated, though. I have 64 GB of RAM and two 25 GB workers performing the task on 600 partitions. map_partitions only drops duplicates within each partition, not across partitions. Any idea what to do?

You probably want to set the index to the column you are deduplicating on. That causes a shuffle/sort, but it means you can operate on the partitions in parallel.
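For reference, a sketch of the approach described in the first comment: repartition(freq=...) requires a datetime index, the weekly 'W' frequency is taken from the comment itself, and the per-partition deduplication is the same index-based variant as above.

# assumes ddf already has timestamp_utc as a sorted DatetimeIndex
ddf = ddf.repartition(freq='W')  # one partition per week of data
ddf = ddf.map_partitions(lambda d: d[~d.index.duplicated(keep='first')])
result = ddf.compute()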