How to calculate the Gini index for a pyspark classification model using Spark ML?


I am trying to compute the Gini index for a classification model built with the GBTClassifier from pyspark ml. I can't seem to find a metric that gives the roc_auc_score the way Python's sklearn does.

Below is the code I have so far, running on Databricks. I am currently using one of the datasets bundled with Databricks:

%fs ls databricks-datasets/adult/adult.data

from pyspark.sql.functions import *
from pyspark.ml.classification import  RandomForestClassifier, GBTClassifier
from pyspark.ml.feature import StringIndexer, OneHotEncoderEstimator, VectorAssembler, VectorSlicer
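# note: OneHotEncoderEstimator was renamed to OneHotEncoder in Spark 3.0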
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import BinaryClassificationEvaluator,MulticlassClassificationEvaluator
from pyspark.mllib.evaluation import BinaryClassificationMetrics
from pyspark.ml.linalg import Vectors
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit

dataset = spark.table("adult")
# splitting the data into train and test data frames
splits = dataset.randomSplit([0.7, 0.3])
train_df = splits[0]
test_df = splits[1]

def churn_predictions(train_df,
                     target_col, 
#                      algorithm, 
#                      model_parameters = conf['model_parameters']
                    ):
  """
  #Function attributes
  dataframe        - training df
  target           - target varibale in the model
  Algorithm        - Algorithm used 
  model_parameters - model parameters used to fine tune the model
  """

  # one hot encoding and assembling
  encoding_var = [i[0] for i in train_df.dtypes if (i[1]=='string') & (i[0]!=target_col)]
  num_var = [i[0] for i in train_df.dtypes if ((i[1]=='int') | (i[1]=='double')) & (i[0]!=target_col)]

  string_indexes = [StringIndexer(inputCol = c, outputCol = 'IDX_' + c, handleInvalid = 'keep') for c in encoding_var]
  onehot_indexes = [OneHotEncoderEstimator(inputCols = ['IDX_' + c], outputCols = ['OHE_' + c]) for c in encoding_var]
  label_indexes = StringIndexer(inputCol = target_col, outputCol = 'label', handleInvalid = 'keep')
  assembler = VectorAssembler(inputCols = num_var + ['OHE_' + c for c in encoding_var], outputCol = "features")
  gbt = GBTClassifier(featuresCol = 'features', labelCol = 'label',
                     maxDepth = 5, 
                     maxBins  = 45,
                     maxIter  = 20)

  pipe = Pipeline(stages = string_indexes + onehot_indexes + [assembler, label_indexes, gbt])
  model = pipe.fit(train_df)

  return model  

gbt_model = churn_predictions(train_df = train_df,
                     target_col = 'income')

#### prediction in test sample ####
gbt_predictions = gbt_model.transform(test_df)
# display(gbt_predictions)
gbt_evaluator = MulticlassClassificationEvaluator(
    labelCol="label", predictionCol="prediction", metricName="accuracy")

accuracy = gbt_evaluator.evaluate(gbt_predictions) * 100
print("Accuracy on test data = %g" % accuracy)

gini_train = 2 * metrics.roc_auc_score(Y, pred_prob) - 1
As you can see on the last line of code, there is evidently no metric called roc_auc_score with which to compute the Gini index.


Any help with this is much appreciated.

The Gini index is typically used to evaluate binary classification models.

You can compute it in pyspark as follows:

from pyspark.ml.evaluation import BinaryClassificationEvaluator

# BinaryClassificationEvaluator reads the rawPrediction and label columns by default
evaluator = BinaryClassificationEvaluator()
auc = evaluator.evaluate(gbt_predictions, {evaluator.metricName: "areaUnderROC"})
gini = 2 * auc - 1.0  # Gini = 2 * AUC - 1
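
If you want to cross-check against sklearn's roc_auc_score (the metric mentioned in the question), here is a minimal sketch, assuming the label column is strictly 0/1 in the test set and the predictions fit in driver memory:

from sklearn.metrics import roc_auc_score

# collect the label and the positive-class probability to the driver
pdf = gbt_predictions.select('label', 'probability').toPandas()
y_true = pdf['label']
y_score = pdf['probability'].apply(lambda v: float(v[1]))  # P(label == 1)

auc_sklearn = roc_auc_score(y_true, y_score)
gini_sklearn = 2 * auc_sklearn - 1.0
print("sklearn AUC = %.4f, Gini = %.4f" % (auc_sklearn, gini_sklearn))

Both routes should agree, since AUC is rank-based and the probability is a monotonic transform of rawPrediction; prefer the BinaryClassificationEvaluator version for large test sets because it stays distributed.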