Dask scheduler exits with 'Killed' on 'ddf.persist()'

Tags: dask, dask-distributed

I'm quite new to Dask, so this may be really obvious... I'm trying to run a distributed Dask setup with one scheduler node and enough worker nodes to hold the data in memory; in this particular case I'm using 15 workers. I can start the cluster fine, and I can also load some data and analyse it.

I have copied the data to the worker nodes, but it is not available on the client machine, so I load it lazily like this:

import dask
import dask.dataframe as dd
from dask import delayed

def load_data(path):
    return dd.read_csv(path)
Then I can do some simple analysis:

taxi_df_2016 = delayed(load_data)('/tmp/2016/*.csv').compute()
taxi_df_2016['fare_amount'].mean().compute()
... which returns a value.

However, when I want to persist the files in memory, the scheduler just dies, printing "Killed" on the console. Running

taxi_df_2016_pers = taxi_df_2016.compute().persist()
shows the following on the scheduler node's console:

distributed.scheduler - INFO - Register tcp://10.0.0.17:40385
distributed.scheduler - INFO - Register tcp://10.0.0.6:42847
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.17:40385
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.6:42847
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.9:44627
distributed.scheduler - INFO - Register tcp://10.0.0.7:44419
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.9:44627
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.7:44419
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.16:41907
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.16:41907
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.18:41879
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.18:41879
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.13:32993
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.13:32993
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.8:33265
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.8:33265
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.14:33851
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.14:33851
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.10:44653
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.10:44653
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.19:40201
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.19:40201
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.5:42207
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.5:42207
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.15:36087
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.15:36087
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.12:32827
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.12:32827
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register tcp://10.0.0.11:35405
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.0.0.11:35405
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Clear task state
Killed
When I look at the dashboard, I can see that all 173 partitions have been loaded successfully and that memory is being used, but at some point after that the scheduler dies.


Any ideas on how to debug this?

It looks like you are delaying dask dataframe functions, which seems odd.


See:
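For illustration, here is a minimal sketch of the pattern usually recommended instead: wrap a plain pandas reader in delayed for each file and combine the pieces with dd.from_delayed, so that a dask dataframe is never nested inside a delayed object. The scheduler address and the per-file paths are placeholders (the question only shows the glob /tmp/2016/*.csv), and fare_amount is the column used in the question:

import pandas as pd
import dask.dataframe as dd
from dask import delayed
from dask.distributed import Client

client = Client('scheduler-address:8786')  # placeholder scheduler address

@delayed
def load_one(path):
    # runs on a worker, so the file only has to exist on the cluster
    return pd.read_csv(path)

# placeholder per-file paths
paths = ['/tmp/2016/part-01.csv', '/tmp/2016/part-02.csv']

ddf = dd.from_delayed([load_one(p) for p in paths])
ddf = ddf.persist()  # keep the partitions in worker memory
print(ddf['fare_amount'].mean().compute())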

I think I figured out what I did wrong: I had one compute() too many.

taxi_df_2016_pers = taxi_df_2016.persist()

works fine. It would be interesting to know why the extra compute() crashes the scheduler, and how I could get more debugging/logging information in a case like this. The reason I load the data with delayed is that the files are not on the client machine but only on the cluster, so if I run

dd.read_csv("…2016/*.csv")

directly, I get an

OSError: …2016/*.csv resolves to no files
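For reference, a consolidated sketch of the working flow described above, with a placeholder scheduler address: the single compute() only runs the delayed call and hands the lazy dask dataframe back to the client, and persist() then keeps its partitions in worker memory without pulling anything to the client.

import dask.dataframe as dd
from dask import delayed
from dask.distributed import Client

client = Client('scheduler-address:8786')  # placeholder scheduler address

def load_data(path):
    # executed on a worker, where the glob actually resolves to files
    return dd.read_csv(path)

# one compute(): build the lazy dataframe on the cluster and return the handle
taxi_df_2016 = delayed(load_data)('/tmp/2016/*.csv').compute()

# persist without a second compute(): partitions stay in worker memory
taxi_df_2016_pers = taxi_df_2016.persist()

print(taxi_df_2016_pers['fare_amount'].mean().compute())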