Python: Is there an efficient alternative to GridSearchCV for finding the best hyperparameters of a deep NN in TensorFlow?
Tags: python, tensorflow, optimization, keras, deep-learning


In my image-classification experiment I want to find good hyperparameters for a CNN model. I used RandomizedSearchCV to search for the best possible hyperparameters, but I got the following error and I do not understand why:

TypeError                                 Traceback (most recent call last)
in ()
     41
     42 model, pred = algorithm_pipeline(X_train, X_test, y_train, y_test, model,
---> 43                                  param_grid, cv=5, scoring_fit='neg_log_loss')
     44
     45 print(model.best_score_)

32 frames

/usr/lib/python3.6/copy.py in deepcopy(x, memo, _nil)
    167                 reductor = getattr(x, "__reduce_ex__", None)
    168                 if reductor:
--> 169                     rv = reductor(4)
    170                 else:
    171                     reductor = getattr(x, "__reduce__", None)

TypeError: can't pickle _thread.RLock objects

I looked into possible solutions for this kind of error on SO, but the error still did not go away.

Here is the relevant part of my code, including the error report. I do not understand what went wrong in my attempt. Could anyone point out how to get rid of this error? Thanks.

My current attempt

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV

def algorithm_pipeline(X_train_data, X_test_data, y_train_data, y_test_data, 
                       model, param_grid, cv=2, scoring_fit='neg_mean_squared_error',
                       do_probabilities = False):
    gs = RandomizedSearchCV(
        estimator=model,
        param_distributions=param_grid, 
        cv=cv, 
        n_jobs=-1, 
        scoring=scoring_fit,
        verbose=2
    )
    fitted_model = gs.fit(X_train_data, y_train_data)
    
    if do_probabilities:
      pred = fitted_model.predict_proba(X_test_data)
    else:
      pred = fitted_model.predict(X_test_data)
    
    return fitted_model, pred
The full code, including the error report, is available at the link.

I do not understand where the error comes from. How can I get rid of it? Any ideas?
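
For context, the deepcopy frame in the traceback suggests (an assumption, not confirmed in the post) that sklearn fails while deep-copying the estimator or its parameter grid, which breaks as soon as a live TensorFlow object (a compiled model, an optimizer instance, a callback) is involved. A minimal sketch of a setup that stays picklable, with an illustrative build_cnn function, input shape, and grid values:

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV

# Hypothetical build function: returns a freshly compiled model each time it is called.
# The wrapper stores the *function*, not a compiled model, so sklearn can clone/pickle it.
def build_cnn(opt='adam'):
    model = Sequential()
    model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(32, 32, 3)))
    model.add(Flatten())
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=build_cnn, verbose=0)

# Keep only plain Python values (strings, numbers) in the grid -- no optimizer or
# callback instances, which are what typically triggers the RLock pickling error.
param_grid = dict(epochs=[10, 20], batch_size=[16, 32], opt=['adam', 'rmsprop'])

gs = RandomizedSearchCV(estimator=model, param_distributions=param_grid,
                        cv=5, n_jobs=1, scoring='neg_log_loss', verbose=2)
# gs.fit(X_train, y_train)  # X_train / y_train as in algorithm_pipeline above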

Update

I also tried a Bayesian optimization approach using the scikit-optimize package, as shown below, but I ran into errors with that as well.


Can anyone help me make the GridSearchCV / Bayesian optimization attempt work? Is there any way to make this feasible? Any ideas? Thanks.
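
For reference, a minimal sketch of what a scikit-optimize attempt could look like, using BayesSearchCV over a KerasClassifier. Here get_model stands for a build function like the one in the answer below, and the search-space values are illustrative, not taken from the original code:

from skopt import BayesSearchCV
from skopt.space import Categorical, Integer
from keras.wrappers.scikit_learn import KerasClassifier

# get_model: a build function returning a compiled Keras model (see the answer below)
model = KerasClassifier(build_fn=get_model, verbose=0)

# Illustrative search space; only plain values / skopt dimensions, nothing TF-specific
search_spaces = {
    'epochs': Integer(5, 50),
    'opt': Categorical(['adam', 'rmsprop']),
    'activation': Categorical(['relu', 'tanh']),
}

bayes_search = BayesSearchCV(model, search_spaces, n_iter=20, cv=3, scoring='accuracy')
# bayes_search.fit(X_train, y_train)
# print(bayes_search.best_params_, bayes_search.best_score_)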

sklearn's GridSearchCV does not work directly on a Keras model. You have to use the keras.wrappers.scikit_learn wrapper to make the model work with the sklearn API, e.g. KerasClassifier.

Sample code, documented inline:

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
import numpy
from sklearn import datasets 

# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Create a function that returns a model
# You can use additional parameters to this function,
# which will be passed by the GridSearchCV, for parameter tuning.
def get_model(opt, activation):
  model = Sequential()
  model.add(Dense(8, input_dim=4, activation=activation))
  model.add(Dense(4, activation=activation))
  model.add(Dense(3, activation='softmax'))
  
  model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
  return model

# Wrap it to make it sklearn compatible
model = KerasClassifier(build_fn=get_model)

# Create the parameter grid
param_grid = dict(epochs=[1,2,3], opt = ['rmsprop', 'adam'], activation=['relu', 'tanh'])

# finally run 
grid = GridSearchCV(estimator=model, param_grid=param_grid)
result = grid.fit(X, y)

# Get the gridsearch best parameters
print(f"Best score: {result.best_score_}, Parameters: {result.best_params_}")
Output:

4/4 [==============================] - 0s 2ms/step - loss: 1.2447 - accuracy: 0.4167
1/1 [==============================] - 0s 5ms/step - loss: 1.4246 - accuracy: 0.0000e+00
4/4 [==============================] - 0s 2ms/step - loss: 1.9505 - accuracy: 0.2500
1/1 [==============================] - 0s 1ms/step - loss: 0.8273 - accuracy: 0.6667
4/4 [==============================] - 0s 2ms/step - loss: 1.0976 - accuracy: 0.4167
............... LOG TRUNCATED........
Epoch 1/2
5/5 [==============================] - 0s 2ms/step - loss: 1.1047 - accuracy: 0.3333
Epoch 2/2
5/5 [==============================] - 0s 1ms/step - loss: 1.0931 - accuracy: 0.3333
Best score: 0.5000000089406967, Parameters: {'activation': 'relu', 'epochs': 2, 'opt': 'adam'}
Or, for RandomizedSearchCV, use:

grid = RandomizedSearchCV(estimator=model, param_distributions=param_grid)
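
RandomizedSearchCV additionally accepts n_iter (how many parameter settings are sampled) and random_state (to make the sampling reproducible), which is what makes it cheaper than an exhaustive grid search, e.g.:

grid = RandomizedSearchCV(estimator=model, param_distributions=param_grid,
                          n_iter=5, random_state=42)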
Edit 1: Bayesian hyperparameter tuning with the Ax framework. The code is documented inline.

from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn import datasets
from ax.service.managed_loop import optimize
from keras.optimizers import Adam, RMSprop


# Seed for reproducible results
np.random.seed(3)

# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=3)

# Create a function that returns a model
def get_model(opt, activation, dropout, lr):
  model = Sequential()
  model.add(Dense(8, input_dim=4, activation=activation))
  model.add(Dropout(dropout))
  model.add(Dense(4, activation=activation))
  model.add(Dense(3, activation='softmax'))
  
  if opt == 'adam':
    optimizer = Adam(lr=lr)
  elif opt == 'rmsprop':
    optimizer = RMSprop(lr=lr)

  model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
  return model

# Method that creates a model and trains it based on the hyper parameters
# Once the model trained we evaluate it on test data and return the accuracy
# This accuracy value will be used by the Bayes optimization
# to identify the next set of hyper-parameters to be used. 
def train_evaluate(parameterization):
    acc = 0
    mymodel = get_model(opt=parameterization["opt"], activation=parameterization["activation"], dropout=parameterization["dropout"], lr=parameterization["lr"])
    mymodel.fit(X_train, y_train, epochs=parameterization["epochs"], verbose=0)
    acc = mymodel.evaluate(X_test, y_test)[1]
    print(parameterization, acc)
    del mymodel
    return acc

# Finally run the Bayes optimization
best_parameters, values, experiment, model = optimize(
     parameters=[
                 {"name": "opt", "type": "choice", "values": ['adam', 'rmsprop']},
                 {"name": "activation", "type": "choice", "values": ['relu', 'tanh']},
                 {"name": "dropout", "type": "choice", "values": [0.0, 0.25, 0.50, 0.75, 0.99]},
                 {"name": "epochs", "type": "choice", "values": [10, 50, 100]},
                 {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True}
                ],
    evaluation_function=train_evaluate,
    objective_name="acc",
    # Total trials. You can change this value based on the dataset size; it determines the number of exploration and exploitation steps.
    total_trials=30,
    )

# Get the best hyper parameters
data = experiment.fetch_data()
df = data.df
best_arm_name = df.arm_name[df["mean"] == df["mean"].max()].values[0]
best_arm = experiment.arms_by_name[best_arm_name]

print(best_parameters)
print(best_arm)
Output:

[INFO 08-12 13:23:03] ax.modelbridge.dispatch_utils: Using Sobol generation strategy.
[INFO 08-12 13:23:03] ax.service.managed_loop: Started full optimization with 30 steps.
[INFO 08-12 13:23:03] ax.service.managed_loop: Running optimization trial 1...
2/2 [==============================] - 0s 2ms/step - loss: 0.8532 - accuracy: 0.6579
[INFO 08-12 13:23:04] ax.service.managed_loop: Running optimization trial 2...
{'opt': 'rmsprop', 'activation': 'tanh', 'dropout': 0.75, 'epochs': 50} 0.6578947305679321
2/2 [==============================] - 0s 3ms/step - loss: 1.2705 - accuracy: 0.2895
[INFO 08-12 13:23:05] ax.service.managed_loop: Running optimization trial 3...
{'opt': 'adam', 'activation': 'relu', 'dropout': 0.99, 'epochs': 10} 0.28947368264198303
2/2 [==============================] - 0s 2ms/step - loss: 0.3625 - accuracy: 0.9737
[INFO 08-12 13:23:06] ax.service.managed_loop: Running optimization trial 4...
............... LOG TRUNCATED, RUN for 3 minutes........
2/2 [==============================] - 0s 2ms/step - loss: 0.9861 - accuracy: 0.5000
[INFO 08-12 13:23:29] ax.service.managed_loop: Running optimization trial 29...
{'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 10} 0.5
2/2 [==============================] - 0s 2ms/step - loss: 0.9654 - accuracy: 0.3158
[INFO 08-12 13:23:30] ax.service.managed_loop: Running optimization trial 30...
2/2 [==============================] - 0s 3ms/step - loss: 1.1320 - accuracy: 0.6842
{'opt': 'adam', 'activation': 'tanh', 'dropout': 0.99, 'epochs': 10} 0.6842105388641357
{'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 100}
Arm(name='8_0', parameters={'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 100})
As you can see from the log, the best hyperparameters for this example are

parameters={'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 100}

Finally, you can retrain and evaluate with this configuration using

train_evaluate(best_parameters)
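
If you want to keep the tuned model itself rather than just the returned accuracy, a small follow-up sketch reusing the get_model defined above (variable and file names are illustrative, and it assumes best_parameters contains every key from the search space, including lr):

# Rebuild a model with the best hyperparameters found by Ax and train it once more
final_model = get_model(opt=best_parameters["opt"],
                        activation=best_parameters["activation"],
                        dropout=best_parameters["dropout"],
                        lr=best_parameters["lr"])
final_model.fit(X_train, y_train, epochs=best_parameters["epochs"], verbose=0)
print(final_model.evaluate(X_test, y_test))   # [loss, accuracy] on the held-out split
final_model.save("best_iris_model.h5")        # optional: persist the tuned model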

@mujjiga Thanks, I fixed the broken link, please see my updated post. I tried your attempt below, but it is too time-consuming for a very deep NN. Do you have any other, more efficient hyperparameter-optimization approach? Would you mind taking a look? Could you post your latest promising output in a gist or colab? Thank you.

@mujjiga Which colab are you looking at? I did exactly that, but the optimization fails after the first epoch. Here it is. Any idea how to fix this? Many thanks.

@mujjiga Thanks for the correction, it worked. Why does each optimization run start randomly? Do we have to keep it fixed, and does it affect the optimization process? In your attempt, how can we include the learning rate and the kernel size as optimization parameters? Any ideas?

Regarding the other question: yes, we can let the framework pick from a range, e.g. using {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True}. Ax is very powerful; read the Ax documentation to see its full capabilities. I updated the answer to show how to use lr. Depending on how long each epoch takes, you can set the number of total trials; for a large dataset/model, anything between 50 and 100 should be good enough.
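
For the kernel-size question in the comments, a hypothetical extension (not part of the original answer) would be to add it as another choice parameter and have the build function consume it, e.g. for a CNN; the input shape and layer sizes below are placeholders:

from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# Hypothetical CNN build function taking kernel_size as a tunable hyperparameter
def get_cnn_model(kernel_size, activation='relu'):
    model = Sequential()
    model.add(Conv2D(16, (kernel_size, kernel_size), activation=activation,
                     input_shape=(32, 32, 3)))
    model.add(Flatten())
    model.add(Dense(3, activation='softmax'))
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

# In the Ax parameter list, kernel_size can then be exposed as a choice parameter:
# {"name": "kernel_size", "type": "choice", "values": [3, 5, 7]}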