Function parameter names in variables in Python

I am using sklearn to train different models. I want to pass different values of the same parameter to sklearn's DecisionTreeClassifier and plot the results, and I want to do this for many such parameters. So I would like to write one generic function that can handle any parameter and its values.

My question is: is there a way to assign the parameter name (rather than its value) to a variable and pass it into my function?


For example, the decision tree takes parameters such as max_depth and min_samples_leaf. I want to try different values of both parameters, one at a time, and plot the results separately for max_depth and for min_samples_leaf.

Use a dictionary and pass it with **:

kwargs = {
    "max_depth": value,            # value: whichever setting you want to try
    "min_samples_leaf": value,
}
fun(**kwargs)                      # equivalent to fun(max_depth=..., min_samples_leaf=...)
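
To tie this back to the question, here is a minimal sketch of a generic sweep function where the parameter name itself lives in a variable and is forwarded through ** unpacking. The function name sweep_param and the train/test variables are illustrative assumptions, not part of the original answer:

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def sweep_param(param_name, values, X_train, y_train, X_test, y_test):
    """Train one DecisionTreeClassifier per value of the parameter named by
    param_name and return the test accuracies, ready for plotting."""
    scores = []
    for v in values:
        # {param_name: v} unpacked with ** is equivalent to writing, e.g.,
        # DecisionTreeClassifier(max_depth=v) when param_name == "max_depth".
        clf = DecisionTreeClassifier(**{param_name: v})
        clf.fit(X_train, y_train)
        scores.append(accuracy_score(y_test, clf.predict(X_test)))
    return scores

# The same function works for any keyword parameter of the classifier:
# depth_scores = sweep_param("max_depth", [2, 4, 8], X_train, y_train, X_test, y_test)
# leaf_scores  = sweep_param("min_samples_leaf", [50, 100, 150], X_train, y_train, X_test, y_test)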

This solution is not very "Pythonic", but it is easy to follow. You can call the classifier in a loop, a nested loop, or something similar.

dt = DecisionTreeClassifier(criterion='entropy', min_samples_leaf=150, min_samples_split=100)
is a standard call to the decision tree; just loop over it and substitute min_samples_leaf and min_samples_split:

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_curve, auc
from sklearn.model_selection import train_test_split

# X, y are assumed to be your feature matrix and labels, defined earlier.
min_samples_leafs = [50, 100, 150]
min_samples_splits = [50, 100, 150]

for sample_leafs in min_samples_leafs:
    for sample_splits in min_samples_splits:

        dt = DecisionTreeClassifier(criterion='entropy', min_samples_leaf=sample_leafs, min_samples_split=sample_splits)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)

        dt = dt.fit(X_train, y_train)
        y_pred_train = dt.predict(X_train)
        y_pred_test = dt.predict(X_test)

        print("Training Accuracy: %.5f" % accuracy_score(y_train, y_pred_train))
        print("Test Accuracy: %.5f" % accuracy_score(y_test, y_pred_test))
        print('sample_leafs: ', sample_leafs)
        print('sample_splits: ', sample_splits)
        print('\n')
Output:

Training Accuracy: 0.96689
Test Accuracy: 0.96348
sample_leafs:  50
sample_splits:  50


Training Accuracy: 0.96689
Test Accuracy: 0.96348
sample_leafs:  50
sample_splits:  100


Training Accuracy: 0.96509
Test Accuracy: 0.96293
sample_leafs:  50
sample_splits:  150


Training Accuracy: 0.96313
Test Accuracy: 0.96256
sample_leafs:  100
sample_splits:  50


Training Accuracy: 0.96313
Test Accuracy: 0.96256
sample_leafs:  100
sample_splits:  100


Training Accuracy: 0.96313
Test Accuracy: 0.96256
sample_leafs:  100
sample_splits:  150


Training Accuracy: 0.96188
Test Accuracy: 0.96037
sample_leafs:  150
sample_splits:  50


Training Accuracy: 0.96188
Test Accuracy: 0.96037
sample_leafs:  150
sample_splits:  100


Training Accuracy: 0.96188
Test Accuracy: 0.96037
sample_leafs:  150
sample_splits:  150
You can turn this into a function by passing the lists like this:

def do_decision_tree_stuff(min_samples_leafs, min_samples_splits):
    ...  # body: the nested loops shown above
and call it like this:

 do_decision_tree_stuff([50, 100, 150], [50, 100, 150])
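
A rough sketch of what that wrapper might look like, assuming X and y are already defined at module level (as in the loop above); it simply moves the nested loops into the function body:

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def do_decision_tree_stuff(min_samples_leafs, min_samples_splits):
    # X and y are assumed to be defined outside the function, as in the loop above.
    # The split does not depend on the tree parameters, so it is done once.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
    for sample_leafs in min_samples_leafs:
        for sample_splits in min_samples_splits:
            dt = DecisionTreeClassifier(criterion='entropy',
                                        min_samples_leaf=sample_leafs,
                                        min_samples_split=sample_splits)
            dt.fit(X_train, y_train)
            print("Test Accuracy: %.5f" % accuracy_score(y_test, dt.predict(X_test)))
            print('sample_leafs: ', sample_leafs)
            print('sample_splits: ', sample_splits)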

Comments:

"This only solves the problem for two variables; it does not scale to many of them." -- "Then pass more lists, a list of lists, or a dictionary, as shown below."

"The problem is that 'param1' is a string, and DecisionTreeClassifier (or any function) does not accept a string as a parameter name. It is the equivalent of calling DecisionTreeClassifier('param1'=value), but I want DecisionTreeClassifier(param1=value), with Python turning the string into the parameter name. Here is what I am doing: after unpacking kwargs (with kwargs.items()), I store max_depth in a variable called param1 inside fun and then call DecisionTreeClassifier(param1=value). But that raises the error 'param1 is not a valid parameter'."

"Don't do that; just call DecisionTreeClassifier(**kwargs). That is the Pythonic way to do what you want."
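
Following the last comment, one way the kwargs idea generalizes to any number of parameters is to keep a dictionary mapping parameter names to candidate values, iterate over every combination, and unpack each combination straight into the classifier. This is only a sketch; the grid values, the helper name sweep_grid, and the train/test variables are assumptions for illustration:

from itertools import product

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def sweep_grid(param_grid, X_train, y_train, X_test, y_test):
    """Try every combination of the values in param_grid and return
    (params, test accuracy) pairs."""
    names = list(param_grid)
    results = []
    for combo in product(*(param_grid[name] for name in names)):
        params = dict(zip(names, combo))
        # ** turns {"max_depth": 5, "min_samples_leaf": 50} into keyword arguments.
        clf = DecisionTreeClassifier(**params).fit(X_train, y_train)
        results.append((params, accuracy_score(y_test, clf.predict(X_test))))
    return results

# Hypothetical grid: parameter names (as strings) mapped to the values to try.
# results = sweep_grid({"max_depth": [3, 5, 10], "min_samples_leaf": [50, 100, 150]},
#                      X_train, y_train, X_test, y_test)

scikit-learn's own ParameterGrid and GridSearchCV implement the same idea, with cross-validation built in, so for anything beyond a quick plot they are usually the better choice.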