Processing a string data transformation on 300 MB / 30M records in distributed Dask
Starting the Dask scheduler on node 1 (4 CPU, 8 GB):
dask-scheduler --host 0.0.0.0 --port 8786
Starting workers on node 2 (8 CPU, 32 GB) and node 3 (8 CPU, 32 GB):

dask-worker tcp://xxx.xxx.xxx.xxx:8786 --nanny-port 3000:3004 --worker-port 3100:3104 --dashboard-address :8789
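A minimal sanity check (just a sketch, reusing the scheduler address above) to confirm that workers from both node 2 and node 3 actually registered with the scheduler:

from dask.distributed import Client

client = Client('tcp://xxx.xxx.xxx.xxx:8786')  # same scheduler address as above
# scheduler_info() lists every registered worker; entries from both nodes should show up
print(client.scheduler_info()['workers'].keys())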
Here is my prototype, with the some_private_processing and some_processing methods redacted:
import glob

import pandas as pd
import dask.dataframe as dd  # needed for dd.from_pandas below
from dask.distributed import Client

N_CORES = 16
THREADS_PER_WORKER = 2

dask_cluster = Client(
    '127.0.0.1:8786'
)
def get_clean_str1(str1):
    ret_tuple = None, False, True, None, False
    if not str1:
        return ret_tuple
    if string_validators(str1) is not True:
        return ret_tuple
    data = some_processing(str1)
    match_flag = False
    if str1 == data.get('formated_str1'):
        match_flag = True
    private_data = some_private_processing(str1)
    private_match_flag = False
    if str1 == private_data.get('formated_private_str1'):
        private_match_flag = True
    # private_str1 was undefined in the original snippet; presumably it comes
    # from the private_data dict
    private_str1 = private_data.get('formated_private_str1')
    ret_tuple = str1, match_flag, False, private_str1, private_match_flag
    return ret_tuple
files = [
    'part-00000-abcd.gz.parquet',
    'part-00001-abcd.gz.parquet',
    'part-00002-abcd.gz.parquet',
]
print('Starting...')
for idx, each_file in enumerate(files):
    dask_cluster.restart()
    print(f'Processing file {idx}: {each_file}')
    # Read the parquet part into a pandas DataFrame on the client,
    # then convert it into a Dask DataFrame
    all_str1s_df = pd.read_parquet(
        each_file,
        engine='pyarrow'
    )
    print(f'Read file {idx}: {each_file}')
    all_str1s_df = dd.from_pandas(all_str1s_df, npartitions=16000)
    print(f'Starting file processing {idx}: {each_file}')
    # Apply get_clean_str1 row by row; every element of the result is a 5-tuple
    str1_res_tuple = all_str1s_df.map_partitions(
        lambda part: part.apply(
            lambda x: get_clean_str1(x['str1']),
            axis=1
        ),
        meta=tuple
    )
    # Unpack the per-row 5-tuples into five separate sequences
    (clean_str1,
     match_flag,
     bad_str1_flag,
     private_str1,
     private_match_flag) = zip(*str1_res_tuple)
    all_str1s_df = all_str1s_df.assign(
        clean_str1=pd.Series(clean_str1)
    )
    all_str1s_df = all_str1s_df.assign(
        match_flag=pd.Series(match_flag)
    )
    all_str1s_df = all_str1s_df.assign(
        bad_str1_flag=pd.Series(bad_str1_flag)
    )
    all_str1s_df = all_str1s_df.assign(
        private_str1=pd.Series(private_str1)
    )
    all_str1s_df = all_str1s_df.assign(
        private_match_flag=pd.Series(private_match_flag)
    )
    # Keep only the rows where the cleaned string did not match the original
    all_str1s_df = all_str1s_df[
        all_str1s_df['match_flag'] == False
    ]
    all_str1s_df = all_str1s_df.repartition(npartitions=200)
    all_str1s_df.to_csv(
        f'results-str1s-{idx}-*.csv'
    )
    print(f'Finished file {idx}: {each_file}')
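For reference, below is a minimal sketch (the clean_partition helper and the meta dict are illustrative, not part of the prototype) of the same row-wise step written so that the five result fields come back as ordinary Dask columns instead of being zipped out of the lazy result:

import pandas as pd
import dask.dataframe as dd

def clean_partition(part):
    # Apply get_clean_str1 to each row of one pandas partition and expand
    # the returned 5-tuple into five named columns
    rows = part['str1'].apply(get_clean_str1)
    return pd.DataFrame(
        rows.tolist(),
        columns=['clean_str1', 'match_flag', 'bad_str1_flag',
                 'private_str1', 'private_match_flag'],
        index=part.index,
    )

# meta spells out the output columns and dtypes so Dask does not have to guess
new_cols = all_str1s_df.map_partitions(
    clean_partition,
    meta={
        'clean_str1': 'object',
        'match_flag': 'bool',
        'bad_str1_flag': 'bool',
        'private_str1': 'object',
        'private_match_flag': 'bool',
    },
)

The new columns can then be attached with assign (for example all_str1s_df.assign(match_flag=new_cols['match_flag'])), and everything stays lazy until to_csv.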
Running this prototype takes more than 8 hours, and I can see that all the data is being processed on just one node, either Node 2 or Node 3, instead of on both Node 2 and Node 3.
I need help understanding where I am going wrong that makes this simple data transformation run for more than 8 hours and still not finish. After increasing the timeouts and the memory, it started working without failures or hangs:
timeouts:
  connect: 180s          # time before connecting fails
  tcp: 180s              # time before calling an unresponsive connection dead
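As far as I understand, these two values correspond to distributed.comm.timeouts.connect and distributed.comm.timeouts.tcp in Dask's configuration (for example in ~/.config/dask/distributed.yaml); a sketch of setting the same values programmatically before creating the Client:

import dask

# Raise the comm timeouts to match the YAML above
dask.config.set({
    'distributed.comm.timeouts.connect': '180s',
    'distributed.comm.timeouts.tcp': '180s',
})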