Python 3.x: How to use dask + distributed for NFS files?


Starting with a distributed dataframe in Dask, I'm trying to distribute some summary-statistics computations across the cluster. Setting up the cluster with
dcluster…
works fine. In a notebook,

import dask.dataframe as dd
from distributed import Executor, progress
e = Executor('...:8786')

df = dd.read_csv(...)
The files I'm reading are on an NFS mount that all the worker machines have access to. At this point I can look at
df.head()
, for example, and everything looks right. Based on a blog post, I thought I should then be able to do:

df_future = e.persist(df)
progress(df_future)
# ... wait for everything to load ...
df_future.head()
But this gives an error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-26-8d59adace8bf> in <module>()
----> 1 fraudf.head()

/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/dataframe/core.py in head(self, n, compute)
    358 
    359         if compute:
--> 360             result = result.compute()
    361         return result
    362 

/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/base.py in compute(self, **kwargs)
     35 
     36     def compute(self, **kwargs):
---> 37         return compute(self, **kwargs)[0]
     38 
     39     @classmethod

/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/base.py in compute(*args, **kwargs)
    108                 for opt, val in groups.items()])
    109     keys = [var._keys() for var in variables]
--> 110     results = get(dsk, keys, **kwargs)
    111 
    112     results_iter = iter(results)

/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/threaded.py in get(dsk, result, cache, num_workers, **kwargs)
     55     results = get_async(pool.apply_async, len(pool._pool), dsk, result,
     56                         cache=cache, queue=queue, get_id=_thread_get_id,
---> 57                         **kwargs)
     58 
     59     return results

/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/async.py in get_async(apply_async, num_workers, dsk, result, cache, queue, get_id, raise_on_exception, rerun_exceptions_locally, callbacks, **kwargs)
    479                 _execute_task(task, data)  # Re-execute locally
    480             else:
--> 481                 raise(remote_exception(res, tb))
    482         state['cache'][key] = res
    483         finish_task(dsk, key, state, results, keyorder.get)

AttributeError: 'Future' object has no attribute 'head'

Traceback
---------
  File "/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/async.py", line 264, in execute_task
    result = _execute_task(task, data)
  File "/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/async.py", line 246, in _execute_task
    return func(*args2)
  File "/work/analytics2/analytics/python/envs/analytics/lib/python3.5/site-packages/dask/dataframe/core.py", line 354, in <lambda>
    dsk = {(name, 0): (lambda x, n: x.head(n=n), (self._name, 0), n)}

What's the right way to distribute the dataframe when it comes from a normal file system instead of HDFS?

Dask is trying to use the single-machine scheduler, which is the default when you create a dataframe with the plain dask library (note the dask/threaded.py frames in the traceback above). Switch the default to use your cluster with the following lines:

import dask
dask.set_options(get=e.get)  # make the cluster scheduler the default for all dask collections
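
With that default in place, the persist-and-head sequence from the question should execute on the cluster. Here is a minimal sketch putting the pieces together, keeping the elided scheduler address from the question and using a hypothetical path ('/nfs/data/*.csv') in place of the one the question omits:

import dask
import dask.dataframe as dd
from distributed import Executor, progress

e = Executor('...:8786')     # scheduler address, elided as in the question
dask.set_options(get=e.get)  # route all dask computations through the cluster

# hypothetical path; any glob on the NFS mount visible to every worker works
df = dd.read_csv('/nfs/data/*.csv')

df_future = e.persist(df)    # a dask dataframe backed by futures on the cluster
progress(df_future)
df_future.head()             # now runs on the workers instead of raising

Note that e.persist(df) returns another dask dataframe (backed by futures held in cluster memory), so the usual methods such as .head() keep working on it once the distributed scheduler is the default. In later releases of these libraries, Executor was renamed Client, and a newly created Client registers itself as the default scheduler, so the set_options step is no longer needed there.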