
How can I search different values for a Keras model in Python?


I implemented an LSTM in Keras.

For it, I use the following three hyperparameters:

  • embedding size
  • hidden layer size
  • learning rate
I now want to find the values that work best for my model. For example, I could assign 3 candidate values to each hyperparameter, say:
embedding size: [100, 150, 200], hidden layer size: [50, 100, 150], learning rate: [0.015, 0.01, 0.005]

What I want to know is which combination works best for my function. I thought I could structure my function like this:

def lstm(embedding_size, hidden_layer_size, learning_rate):
    # train the model with these hyperparameters and return its evaluation score
    return score
The combination with the highest score would be the best one.
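Before reaching for a library, this can be done with an exhaustive grid search over all combinations using `itertools.product`. A minimal sketch; the `lstm` function below is a stand-in for the real training-and-scoring function, returning an arbitrary deterministic number just so the loop is runnable:

```python
from itertools import product

# Candidate values for each hyperparameter
embedding_sizes = [100, 150, 200]
hidden_layer_sizes = [50, 100, 150]
learning_rates = [0.015, 0.01, 0.005]

def lstm(embedding_size, hidden_layer_size, learning_rate):
    # Stand-in for the real function: train the Keras model with these
    # hyperparameters and return its validation score.
    return embedding_size / 1000 + hidden_layer_size / 1000 + learning_rate

best_score = float('-inf')
best_params = None
# Try every combination of the three lists (3 * 3 * 3 = 27 runs)
for emb, hidden, lr in product(embedding_sizes, hidden_layer_sizes, learning_rates):
    score = lstm(emb, hidden, lr)
    if score > best_score:
        best_score, best_params = score, (emb, hidden, lr)

print(best_params, best_score)
```

The drawback is that the number of runs grows multiplicatively with each hyperparameter, which is why random or Bayesian search is usually preferred once real training time is involved.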

I know scikit-learn provides functions for this, but I don't know how to use them with a custom function (if that is possible at all). Here is a source I found:


Can anyone help me solve this with a library, or with a custom function that compares all the values?
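scikit-learn's `GridSearchCV` expects an estimator object rather than a plain scoring function, so for a custom function a small random-search loop is often simpler. A stdlib-only sketch, with a hypothetical `score_model` placeholder where the real `lstm(...)` call would go:

```python
import random

random.seed(0)  # only so the sketch is reproducible

search_space = {
    'embedding_size': [100, 150, 200],
    'hidden_layer_size': [50, 100, 150],
    'learning_rate': [0.015, 0.01, 0.005],
}

def score_model(params):
    # Placeholder: call the real lstm(...) here and return its score.
    return sum(params.values())

best_score = float('-inf')
best_params = None
for _ in range(10):  # number of random trials
    # Sample one value per hyperparameter
    params = {name: random.choice(values) for name, values in search_space.items()}
    score = score_model(params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```

With real training, the trial budget replaces the full 27-combination grid, at the cost of possibly missing the exact optimum.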

Use hyperopt. Here is an example with a random forest:

from sklearn.ensemble import RandomForestClassifier

from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

def accuracy(params):
    # x_train, y_train, x_test, y_test are assumed to be defined elsewhere
    clf = RandomForestClassifier(**params)
    clf.fit(x_train, y_train)
    return clf.score(x_test, y_test)


parameters = {
    'max_depth': hp.choice('max_depth', range(80, 120)),
    'max_features': hp.choice('max_features', range(30, x_train.shape[1])),
    'n_estimators': hp.choice('n_estimators', range(30, 100)),
    'max_leaf_nodes': hp.choice('max_leaf_nodes', range(2, 8)),
    'min_samples_leaf': hp.choice('min_samples_leaf', range(1, 30)),
    'min_samples_split': hp.choice('min_samples_split', range(2, 100)),
    'criterion': hp.choice('criterion', ['gini', 'entropy']),
}


best_score = 0
def f(params):
    global best_score
    acc = accuracy(params)
    if acc > best_score:
        best_score = acc
        print('Improving:', best_score, params)
    # hyperopt minimizes, so return the negative accuracy as the loss
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()

# Note: for hp.choice spaces, fmin returns the chosen *indices*;
# use hyperopt.space_eval to recover the actual values.
best_params = fmin(f, parameters, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:', best_params)