Updating values inside parentheses in a Python dictionary

Tags: python, machine-learning, scikit-learn, jupyter-notebook

I am trying to replace values in a dictionary using a for loop. It is a bit special, though, because the values I need to change sit inside parentheses.

My question is: how do I update a value that is inside the parentheses of a dictionary entry?

I need to:

Update n_estimators accordingly

Update the classifier accordingly, using models such as BaggingClassifier and RandomForestClassifier

Initialization

Loop

Current result


It seems I'm doing this wrong, because I end up adding a new value instead of updating the current one.

It looks like you are adding new values. To update an existing entry you have to use index assignment, i.e. assign to dict[key].
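For illustration, here is a minimal sketch of the difference, using a hypothetical models dictionary (not the asker's actual code):

from sklearn.ensemble import RandomForestClassifier

# Hypothetical dictionary of the kind described in the question:
# the value is a classifier built with its parameters "inside the parentheses".
models = {'rf': RandomForestClassifier(n_estimators=5, random_state=12345)}

# This ADDS a new entry and leaves the old one untouched:
models['rf_10'] = RandomForestClassifier(n_estimators=10, random_state=12345)

# Index assignment to the EXISTING key replaces (updates) its value:
models['rf'] = RandomForestClassifier(n_estimators=10, random_state=12345)

print(models['rf'].n_estimators)  # -> 10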

I guess you can get rid of the dictionary entirely. Here is a possible way to create different classifier instances with different parameters:

from sklearn.ensemble import RandomForestClassifier, BaggingClassifier

# Instantiate each classifier class with each candidate n_estimators value.
for model in [RandomForestClassifier, BaggingClassifier]:
    for n in [5, 10, 20]:
        clf = model(random_state=12345, n_estimators=n)
        print(clf)
The above code produces:

RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=5, n_jobs=1,
            oob_score=False, random_state=12345, verbose=0,
            warm_start=False)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1,
            oob_score=False, random_state=12345, verbose=0,
            warm_start=False)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=20, n_jobs=1,
            oob_score=False, random_state=12345, verbose=0,
            warm_start=False)
BaggingClassifier(base_estimator=None, bootstrap=True,
         bootstrap_features=False, max_features=1.0, max_samples=1.0,
         n_estimators=5, n_jobs=1, oob_score=False, random_state=12345,
         verbose=0, warm_start=False)
BaggingClassifier(base_estimator=None, bootstrap=True,
         bootstrap_features=False, max_features=1.0, max_samples=1.0,
         n_estimators=10, n_jobs=1, oob_score=False, random_state=12345,
         verbose=0, warm_start=False)
BaggingClassifier(base_estimator=None, bootstrap=True,
         bootstrap_features=False, max_features=1.0, max_samples=1.0,
         n_estimators=20, n_jobs=1, oob_score=False, random_state=12345,
         verbose=0, warm_start=False)
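If you would rather keep the dictionary, another option is to update the stored estimators in place with scikit-learn's set_params, which changes the value "inside the parentheses" without adding a new dictionary entry. A sketch with hypothetical key names, not the asker's original code:

from sklearn.ensemble import RandomForestClassifier, BaggingClassifier

# Hypothetical dictionary keyed by classifier name.
classifiers = {
    'RandomForest': RandomForestClassifier(random_state=12345),
    'Bagging': BaggingClassifier(random_state=12345),
}

for name, clf in classifiers.items():
    for n in [5, 10, 20]:
        # set_params mutates the existing estimator instead of adding a new key.
        clf.set_params(n_estimators=n)
        print(name, clf.n_estimators)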