Python: multi-GPU KMeans clustering with RAPIDS freezes


I'm new to Python and RAPIDS.AI, and I'm trying to recreate SKLearn KMeans on a multi-GPU setup (I have 2 GPUs) using Dask and RAPIDS (I use the RAPIDS Docker image, which also has a Jupyter notebook installed).

The code I show below (I also show an example with the Iris dataset) freezes, and the Jupyter notebook cell never finishes. I tried the %debug magic and the Dask dashboard, but I couldn't reach any clear conclusion (the only one I can think of is that it may be due to device_m_csv.iloc, but I'm not sure). Another possibility is that I forgot some wait(), compute() or persist() call (actually, I'm not sure in which cases they should each be used).

For better readability, I'll explain the code:

  • First, the needed imports
  • Next, the KMeans algorithm starts (separator: ##########################……)
  • Create a CUDA cluster with 2 workers, one per GPU (I have 2 GPUs), and 1 thread per worker (I read this is the recommended value), then start a client
  • Read the dataset from a CSV into 2 partitions (
    chunksize='2kB'
    )
  • Split that dataset into data (usually called
    X
    ) and labels (usually called
    y
    )
  • Instantiate cu_KMeans with Dask
  • Fit the model
  • Predict values
  • Check the score obtained
Sorry I can't provide more data, but I couldn't retrieve it; I'll gladly provide whatever is needed to resolve the question.

Where, or what, do you think the problem could be?

Many thanks in advance.

%%time

# Import libraries and show its versions
import numpy as np; print('NumPy Version:', np.__version__)
import pandas as pd; print('Pandas Version:', pd.__version__)
import sklearn; print('Scikit-Learn Version:', sklearn.__version__)
import nvstrings, nvcategory
import cupy; print('cuPY Version:', cupy.__version__)
import cudf; print('cuDF Version:', cudf.__version__)
import cuml; print('cuML Version:', cuml.__version__)
import dask; print('Dask Version:', dask.__version__)
import dask_cuda; print('DaskCuda Version:', dask_cuda.__version__)
import dask_cudf; print('DaskCuDF Version:', dask_cudf.__version__)
import matplotlib; print('MatPlotLib Version:', matplotlib.__version__)
import seaborn as sns; print('SeaBorn Version:', sns.__version__)
#import time
#import warnings

from dask import delayed
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster, wait
from dask_ml.cluster import KMeans as skmKMeans
from dask_cuda import LocalCUDACluster

from sklearn import metrics
from sklearn.cluster import KMeans as skKMeans
from sklearn.metrics import adjusted_rand_score as sk_adjusted_rand_score, silhouette_score as sk_silhouette_score
from cuml.cluster import KMeans as cuKMeans
from cuml.dask.cluster.kmeans import KMeans as cumKMeans
from cuml.metrics import adjusted_rand_score as cu_adjusted_rand_score

# Configure matplotlib library
import matplotlib.pyplot as plt
%matplotlib inline

# Configure seaborn library
sns.set()
#sns.set(style="white", color_codes=True)
%config InlineBackend.figure_format = 'svg'

# Configure warnings
#warnings.filterwarnings("ignore")


####################################### KMEANS #############################################################
# Create local cluster
cluster = LocalCUDACluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

# Identify number of workers
n_workers = len(client.has_what().keys())

# Read data in host memory
device_m_csv = dask_cudf.read_csv('./DataSet/iris.csv', header = 0, delimiter = ',', chunksize='2kB') # Get complete CSV. Chunksize is 2kb for getting 2 partitions
#x = host_data.iloc[:, [0,1,2,3]].values
device_m_data = device_m_csv.iloc[:, [0, 1, 2, 3]] # Get data columns
device_m_labels = device_m_csv.iloc[:, 4] # Get labels column

# Plot data
#sns.pairplot(device_csv.to_pandas(), hue='variety');

# Define variables
label_type = { 'Setosa': 1, 'Versicolor': 2, 'Virginica': 3 } # Dictionary of label types

# Create KMeans
cu_m_kmeans = cumKMeans(init = 'k-means||',
                     n_clusters = len(device_m_labels.unique()),
                     oversampling_factor = 40,
                     random_state = 0)
# Fit data in KMeans
cu_m_kmeans.fit(device_m_data)

# Predict data
cu_m_kmeans_labels_predicted = cu_m_kmeans.predict(device_m_data).compute()

# Check score
#print('Cluster centers:\n',cu_m_kmeans.cluster_centers_)
#print('adjusted_rand_score: ', sk_adjusted_rand_score(device_m_labels, cu_m_kmeans.labels_))
#print('silhouette_score: ', sk_silhouette_score(device_m_data.to_pandas(), cu_m_kmeans_labels_predicted))

# Close local cluster
client.close()
cluster.close()
Example of the Iris dataset:


EDIT 1: @Corey, this is the output I get with your code:

NumPy Version: 1.17.5
Pandas Version: 0.25.3
Scikit-Learn Version: 0.22.1
cuPY Version: 6.7.0
cuDF Version: 0.12.0
cuML Version: 0.12.0
Dask Version: 2.10.1
DaskCuda Version: 0+unknown
DaskCuDF Version: 0.12.0
MatPlotLib Version: 3.1.3
SeaBorn Version: 0.10.0
Cluster centers:
           0         1         2         3
0  5.006000  3.428000  1.462000  0.246000
1  5.901613  2.748387  4.393548  1.433871
2  6.850000  3.073684  5.742105  2.071053
adjusted_rand_score:  0.7302382722834697
silhouette_score:  0.5528190123564102

I modified your reproducible example slightly and was able to produce output on a recent RAPIDS nightly.

Here is the output of the script:

(cuml_dev_2) cjnolet@deeplearn ~ $ python ~/kmeans_mnmg_reproduce.py 
NumPy Version: 1.18.1
Pandas Version: 0.25.3
Scikit-Learn Version: 0.22.2.post1
cuPY Version: 7.2.0
cuDF Version: 0.13.0a+3237.g61e4d9c
cuML Version: 0.13.0a+891.g4f44f7f
Dask Version: 2.11.0+28.g10db6ba
DaskCuda Version: 0+unknown
DaskCuDF Version: 0.13.0a+3237.g61e4d9c
MatPlotLib Version: 3.2.0
SeaBorn Version: 0.10.0
/share/software/miniconda3/envs/cuml_dev_2/lib/python3.7/site-packages/dask/array/random.py:27: FutureWarning: dask.array.random.doc_wraps is deprecated and will be removed in a future version
  FutureWarning,
/share/software/miniconda3/envs/cuml_dev_2/lib/python3.7/site-packages/distributed/dashboard/core.py:79: UserWarning: 
Port 8787 is already in use. 
Perhaps you already have a cluster running?
Hosting the diagnostics dashboard on a random port instead.
  warnings.warn("\n" + msg)
bokeh.server.util - WARNING - Host wildcard '*' will allow connections originating from multiple (or possibly all) hostnames or IPs. Use non-wildcard values to restrict access explicitly
Cluster centers:
           0         1         2         3
0  5.883607  2.740984  4.388525  1.434426
1  5.006000  3.428000  1.462000  0.246000
2  6.853846  3.076923  5.715385  2.053846
adjusted_rand_score:  0.7163421126838475
silhouette_score:  0.5511916046195927
And here is the modified script that produced this output:

# Import libraries and show their versions
import numpy as np; print('NumPy Version:', np.__version__)
import pandas as pd; print('Pandas Version:', pd.__version__)
import sklearn; print('Scikit-Learn Version:', sklearn.__version__)
import nvstrings, nvcategory
import cupy; print('cuPY Version:', cupy.__version__)
import cudf; print('cuDF Version:', cudf.__version__)
import cuml; print('cuML Version:', cuml.__version__)
import dask; print('Dask Version:', dask.__version__)
import dask_cuda; print('DaskCuda Version:', dask_cuda.__version__)
import dask_cudf; print('DaskCuDF Version:', dask_cudf.__version__)
import matplotlib; print('MatPlotLib Version:', matplotlib.__version__)
import seaborn as sns; print('SeaBorn Version:', sns.__version__)
#import time
#import warnings

from dask import delayed
import dask.dataframe as dd
from dask.distributed import Client, LocalCluster, wait
from dask_ml.cluster import KMeans as skmKMeans
from dask_cuda import LocalCUDACluster

from sklearn import metrics
from sklearn.cluster import KMeans as skKMeans
from sklearn.metrics import adjusted_rand_score as sk_adjusted_rand_score, silhouette_score as sk_silhouette_score
from cuml.cluster import KMeans as cuKMeans
from cuml.dask.cluster.kmeans import KMeans as cumKMeans
from cuml.metrics import adjusted_rand_score as cu_adjusted_rand_score

# Configure matplotlib library
import matplotlib.pyplot as plt

# Configure seaborn library
sns.set()
#sns.set(style="white", color_codes=True)

# Configure warnings
#warnings.filterwarnings("ignore")

####################################### KMEANS #############################################################
# Create local cluster
cluster = LocalCUDACluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

# Identify number of workers
n_workers = len(client.has_what().keys())

# Read data in host memory
from sklearn.datasets import load_iris
loader = load_iris()
#x = host_data.iloc[:, [0,1,2,3]].values
device_m_data = dask_cudf.from_cudf(cudf.from_pandas(pd.DataFrame(loader.data)), npartitions=2) # Get data columns
device_m_labels = dask_cudf.from_cudf(cudf.from_pandas(pd.DataFrame(loader.target)), npartitions=2)

# Plot data
#sns.pairplot(device_csv.to_pandas(), hue='variety');

# Define variables
label_type = { 'Setosa': 1, 'Versicolor': 2, 'Virginica': 3 } # Dictionary of label types

# Create KMeans
cu_m_kmeans = cumKMeans(init = 'k-means||',
                        n_clusters = len(np.unique(loader.target)),
                        oversampling_factor = 40,
                        random_state = 0)

# Fit data in KMeans
cu_m_kmeans.fit(device_m_data)

# Predict data
cu_m_kmeans_labels_predicted = cu_m_kmeans.predict(device_m_data).compute()

# Check score
print('Cluster centers:\n', cu_m_kmeans.cluster_centers_)
print('adjusted_rand_score: ', sk_adjusted_rand_score(loader.target, cu_m_kmeans_labels_predicted.values.get()))
print('silhouette_score: ', sk_silhouette_score(device_m_data.compute().to_pandas(), cu_m_kmeans_labels_predicted))

# Close local cluster
client.close()
cluster.close()
Could you provide the output of these library versions? I would also recommend running the modified script to see whether it runs successfully for you. If it doesn't, we can dig further into whether this is Docker-related, RAPIDS-version-related, or something else.


If you have access to the command prompt that is running the Jupyter notebook, it might help to enable logging by passing verbose=True when constructing the KMeans object. That could help us narrow down exactly where things are getting stuck.
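For illustration, here is that verbose pattern with scikit-learn's KMeans (already imported in the question's script as skKMeans); cuML's constructor takes a verbose flag in the same spirit, though the exact log output differs. With verbose enabled, each iteration is logged to stdout, which is what you would watch for in the notebook server's console:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs; verbose=1 makes each Lloyd iteration print
# its inertia to stdout so you can see whether fitting makes progress.
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]])
km = KMeans(n_clusters=2, n_init=1, verbose=1, random_state=0).fit(X)
print(sorted(set(km.labels_)))  # [0, 1]
```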

The Dask documentation is really good and extensive, though I admit there are some