Python: scikit-learn logistic regression with solver 'lbfgs' does not improve in accuracy as max_iter increases


I trained a logistic regression model with solver='lbfgs' and different values of max_iter:

# Applying logistic regression by solver = 'lbfgs' on standard scaled values with different max_iter.

def lbfgs( max_iter ) :
    log_reg_func_std_scale_lbfgs = LogisticRegression( solver = 'lbfgs', max_iter = max_iter )
    log_reg_model_std_scale_lbfgs = log_reg_func_std_scale_lbfgs.fit( x_train_std_scale, y_train )
    return log_reg_func_std_scale_lbfgs


max_iter_values = [ 10, 20, 50, 100, 1000 ]

for max_iter in max_iter_values :
    log_reg_func_std_scale_lbfgs = lbfgs( max_iter )
    print( max_iter )
    predict_train_std_scale_lbfgs = log_reg_func.predict( x_train_std_scale )
    acc_train_std_scale_lbfgs = ( predict_train_std_scale_lbfgs == y_train ).mean() * 100
    print( acc_train_std_scale_lbfgs, log_reg_func_std_scale_lbfgs.score( x_train_std_scale, y_train ) )
    cm_std_scale_lbfgs = metrics.confusion_matrix( y_train, predict_train_std_scale_lbfgs )
    print( cm_std_scale_lbfgs )
    print( '\n\n' )
I got these results:

10
10.105248185941043 0.8948095238095238
[[3945    3   62   12    7   37   45    4    4   13]
 [   0 4660    9    2    1    4    3    2    2    1]
 [  66  303 3303  176   30   10  137   68   60   24]
 [  52  352   84 3490   18   53   26  110   56  110]
 [  16   68   51   10 3563   49   29   45    4  237]
 [  83  209   43  154  176 2705  174   37   78  136]
 [  37   39   96    2   14   25 3912    5    7    0]
 [   6   75   19    8   17    7    1 4081    1  186]
 [  77  863   70   75  129  367   40   59 2191  192]
 [  22  101    9   28  169   12    0  168    9 3670]]

20
10.105248185941043 0.9284523809523809
EXACTLY SAME CONFUSION MATRIX

50
10.105248185941043 0.9362380952380952
EXACTLY SAME CONFUSION MATRIX

100
10.105248185941043 0.9368095238095238
EXACTLY SAME CONFUSION MATRIX

1000
10.105248185941043 0.9371666666666667
EXACTLY SAME CONFUSION MATRIX
I have 3 questions:

  • Why does acc_train_std_scale_lbfgs not increase (or change) as the number of iterations (max_iter) increases?

  • Why does log_reg_func_std_scale_lbfgs.score(x_train_std_scale, y_train) increase (slightly) while the cm_std_scale_lbfgs values stay exactly the same?

  • Does the model improve as the max_iter value increases?


  • Enable verbose to see whether it has already converged. If convergence is happening, there is no reason to keep iterating. lbfgs is very different from SGD and co.: it needs far fewer iterations (see the default value of 100) and reliably converges without hyperparameter tuning.
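A minimal sketch of that convergence check, using a synthetic dataset from make_classification as a stand-in for the question's scaled training data (the dataset and sizes here are assumptions, not the asker's data). Besides verbose, the fitted estimator's n_iter_ attribute reports how many iterations lbfgs actually ran; once it is below max_iter, the solver stopped early because it converged, and larger max_iter values change nothing.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the scaled training data in the question.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X = StandardScaler().fit_transform(X)

for max_iter in [10, 20, 50, 100, 1000]:
    model = LogisticRegression(solver='lbfgs', max_iter=max_iter)
    model.fit(X, y)  # may emit a ConvergenceWarning for small max_iter
    # n_iter_ is the number of iterations lbfgs actually performed.
    print(max_iter, model.n_iter_, model.score(X, y))
```

If the printed n_iter_ stops growing between two max_iter values, the model had already converged at the smaller one, which is why the score barely moves afterwards.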