Machine learning: How to evaluate an XGBoost probability model?


Below is a sample of the validation table with XGBoost predictions. Here we are trying to predict the probability of a team winning at the end of each over of a cricket match. The model is trained with (Inning, Over, Runs/Over, Run_Rate) as the training features X and (Target) as the training label Y.

The target variable is assigned by the following rule: Target = 1 if Team 1 is the winner, otherwise 0 (a sketch of how X and the Target column are built follows the table).

|Match_Id|Inning|Over|Team 1|Team 2|winner|Bat_Team|Runs/Over|Inning Score|Run_Rate |Target|Pred |
|--------|------|----|------|------|------|--------|---------|------------|---------|------|-----|
| a      |    1 |  1 |   A1 |   A2 |   A2 |   A2   |   8     |     8      |    8.0  |   0  | 0.70|
| a      |    1 |  2 |   A1 |   A2 |   A2 |   A2   |   16    |     24     |    12.0 |   0  | 0.82|
| a      |    2 |  1 |   A1 |   A2 |   A2 |   A1   |   7     |     7      |    7.0  |   0  | 0.87|
| a      |    2 |  2 |   A1 |   A2 |   A2 |   A1   |   5     |     12     |    6.0  |   0  | 0.95|
| b      |    1 |  1 |   A3 |   A2 |   A3 |   A3   |   22    |     22     |    22.0 |   1  | 0.96|
| b      |    1 |  2 |   A3 |   A2 |   A3 |   A3   |   16    |     38     |    19.0 |   1  | 0.82|
| b      |    2 |  1 |   A3 |   A2 |   A3 |   A2   |   23    |     23     |    23.0 |   1  | 0.32|
| b      |    2 |  2 |   A3 |   A2 |   A3 |   A2   |   12    |     35     |    17.5 |   1  | 0.90|
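For context, the features and the Target column can be assembled roughly as in the sketch below (the pandas DataFrame `df` and the file name "matches.csv" are assumptions; the column names follow the table above):

import pandas as pd

# Sketch only: `df` and "matches.csv" are assumed; column names follow the table above
df = pd.read_csv("matches.csv")

# Target = 1 when Team 1 won the match, otherwise 0
df["Target"] = (df["Team 1"] == df["winner"]).astype(int)

# Training features as described above
feature_cols = ["Inning", "Over", "Runs/Over", "Run_Rate"]
train_X = df[feature_cols].values
train_y = df["Target"].values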
We used the following parameters in the model:

import xgboost as xgb

def runXGB(train_X, train_y, seed_val=2):
    # binary:logistic makes the model output P(Target = 1), i.e. P(Team 1 wins)
    param = {}
    param['objective'] = 'binary:logistic'
    param['eta'] = 0.05                  # learning rate
    param['max_depth'] = 8
    param['silent'] = 1
    param['eval_metric'] = "auc"
    param['min_child_weight'] = 1
    param['subsample'] = 0.7             # row subsampling per tree
    param['colsample_bytree'] = 0.7      # feature subsampling per tree
    param['seed'] = seed_val
    num_rounds = 100

    plst = list(param.items())
    xgtrain = xgb.DMatrix(train_X, label=train_y)
    model = xgb.train(plst, xgtrain, num_rounds)
    return model




model = runXGB(dev_X, dev_y)        # train on the dev split
xgtest = xgb.DMatrix(val_X)
preds = model.predict(xgtest)       # predicted probability that Team 1 wins
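
For context, I can compute standard probability metrics on the validation split as sketched below (this assumes `val_y` holds the 0/1 Target values for the rows of val_X, which is not shown above), but I am not sure which of them is the right way to judge this model:

from sklearn.metrics import roc_auc_score, log_loss, brier_score_loss

# Assumption: val_y is the 0/1 Target for the rows in val_X;
# preds are the probabilities returned by model.predict above
auc = roc_auc_score(val_y, preds)        # how well the probabilities rank wins over losses
ll = log_loss(val_y, preds)              # penalises confident but wrong probabilities
brier = brier_score_loss(val_y, preds)   # mean squared error of the probabilities
print("AUC: %.3f, log loss: %.3f, Brier: %.3f" % (auc, ll, brier))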
My questions are as follows:

  • How do we interpret the predicted probability relative to the target variable, e.g. a row whose Target is 0 getting a prediction of 0.70?
  • How do we measure/evaluate the accuracy of the XGBoost model?
  • Consider two scenarios: one model predicts the winner with a probability of 70% and another predicts it with 60%. What should I conclude from this? Which one is better, and why?