Machine learning PipeOps make parameters unavailable for tuning in mlr3proba

Tags: machine-learning, survival-analysis, mlr3

I am using the mlr3proba package for machine learning survival analysis. My dataset contains factor, numeric and integer features. I preprocess the dataset with the "scale" and "encode" PipeOps for the DeepHit and DeepSurv neural network methods, with the code below:

library(mlr3)
library(mlr3proba)
library(mlr3pipelines)
library(mlr3tuning)
library(paradox)

# 'dataset', 'time' and 'status' refer to data not shown here
task.mlr <- TaskSurv$new(id = "id", backend = dataset, time = time, event = status)

inner.rsmp <- rsmp("cv", folds = 5)
measure <- msr("surv.cindex")
tuner <- tnr("random_search")
terminator <- trm("evals", n_evals = 30)

deephit.learner <- lrn("surv.deephit", optimizer = "adam", epochs = 50)

# tuning ranges are given with the learner's own parameter names
nn.search_space <- ps(dropout = p_dbl(lower = 0, upper = 1),
                      alpha = p_dbl(lower = 0, upper = 1))

# preprocessing: encode factor columns and scale numeric features before the learner
deephit.learner <- po("encode") %>>% po("scale") %>>% po("learner", deephit.learner)

deephit.instance <- TuningInstanceSingleCrit$new(
   task = task.mlr,
   learner = deephit.learner,
   search_space = nn.search_space,
   resampling = inner.rsmp,
   measure = measure,
   terminator = terminator
)

tuner$optimize(deephit.instance)

Error in self$assert(xs):
   Assertion on 'xs' failed: Parameter 'dropout' not available. Did you mean 'encode.method' / 'encode.affect_columns' / 'scale.center'?.


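For the DeepSurv learner, the same preprocessing wrapper would presumably apply (a minimal sketch only, assuming the surv.deepsurv learner is installed; the variable name is illustrative):

# sketch: identical encode + scale preprocessing wrapped around DeepSurv
deepsurv.learner <- po("encode") %>>% po("scale") %>>% po("learner", lrn("surv.deepsurv"))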
Thank you very much for your help.

Hi, thanks for using mlr3proba! This happens because parameter names change when the learner is wrapped in a pipeline, as you can see in the example below. There are a few options to fix this: you can change the parameter ids to match the new names after wrapping the learner in the PipeOps (Option 1 below), you can specify the tuning ranges for the learner first and then wrap it in the PipeOps (Option 2 below), or you can use an AutoTuner and wrap that in the PipeOps; I tend to use the last option myself.

library(mlr3proba)
library(mlr3)
library(paradox)
library(mlr3tuning)
library(mlr3learners)
library(mlr3pipelines)

task.mlr <- tsk("rats")

## Option 1 - wrap the learner in the PipeOps first, then use the new (prefixed) parameter ids

deephit.learner <- lrn("surv.deephit")
deephit.learner <- po("encode") %>>% po("scale") %>>% po("learner", deephit.learner)
deephit.learner$param_set$ids()
>  [1] encode.method                encode.affect_columns
>  [3] scale.center                 scale.scale
>  [5] scale.robust                 scale.affect_columns
>  [7] surv.deephit.frac            surv.deephit.cuts
>  [9] surv.deephit.cutpoints       surv.deephit.scheme
> [11] surv.deephit.cut_min         surv.deephit.num_nodes
> [13] surv.deephit.batch_norm      surv.deephit.dropout
> [15] surv.deephit.activation      surv.deephit.custom_net
> [17] surv.deephit.device          surv.deephit.mod_alpha
> [19] surv.deephit.sigma           surv.deephit.shrink
> [21] surv.deephit.optimizer       surv.deephit.rho
> [23] surv.deephit.eps             surv.deephit.lr
> [25] surv.deephit.weight_decay    surv.deephit.learning_rate
> [27] surv.deephit.lr_decay        surv.deephit.betas
> [29] surv.deephit.amsgrad         surv.deephit.lambd
> [31] surv.deephit.alpha           surv.deephit.t0
> [33] surv.deephit.momentum        surv.deephit.centered
> [35] surv.deephit.etas            surv.deephit.step_sizes
> [37] surv.deephit.dampening       surv.deephit.nesterov
> [39] surv.deephit.batch_size      surv.deephit.epochs
> [41] surv.deephit.verbose         surv.deephit.num_workers
> [43] surv.deephit.shuffle         surv.deephit.best_weights
> [45] surv.deephit.early_stopping  surv.deephit.min_delta
> [47] surv.deephit.patience        surv.deephit.interpolate
> [49] surv.deephit.inter_scheme    surv.deephit.sub

# the search space now uses the prefixed ids
nn.search_space <- ps(surv.deephit.dropout = p_dbl(lower = 0, upper = 1),
                      surv.deephit.alpha = p_dbl(lower = 0, upper = 1))

deephit.learner <- GraphLearner$new(deephit.learner)

deephit.instance <- TuningInstanceSingleCrit$new(
   task = task.mlr,
   learner = deephit.learner,
   search_space = nn.search_space,
   resampling = rsmp("holdout"),
   measure = msr("surv.cindex"),
   terminator = trm("evals", n_evals = 2)
)

tuner <- tnr("random_search")
tuner$optimize(deephit.instance)
> (random search log trimmed)
> INFO  [bbotk] Finished optimizing after 2 evaluation(s)
> INFO  [bbotk] Result:
>    surv.deephit.dropout surv.deephit.alpha learner_param_vals  x_domain surv.harrell_c
> 1:           0.05524693          0.2895437                ...       ...      0.7749676

## Option 2 - give the learner its tuning ranges first, then wrap it in the PipeOps

deephit.learner <- lrn("surv.deephit")
# ranges are set with the learner's own parameter names before wrapping
deephit.learner$param_set$values$dropout <- to_tune(0, 1)
deephit.learner$param_set$values$alpha <- to_tune(0, 1)

deephit.learner <- po("encode") %>>% po("scale") %>>% po("learner", deephit.learner)
deephit.learner <- GraphLearner$new(deephit.learner)

tune.deephit <- tune_nested(
   method = "random_search",
   task = task.mlr,
   learner = deephit.learner,
   inner_resampling = rsmp("holdout"),
   outer_resampling = rsmp("holdout"),
   measure = msr("surv.cindex"),
   term_evals = 2
)
> (nested resampling log trimmed)
> INFO  [bbotk] Finished optimizing after 2 evaluation(s)
> INFO  [bbotk] Result:
>    surv.deephit.dropout surv.deephit.alpha learner_param_vals  x_domain surv.harrell_c
> 1:            0.3492627          0.2304623                ...       ...      0.6745362
Created on 2021-04-26 by the reprex package (v0.3.0)

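The third option mentioned in the answer (an AutoTuner wrapped inside the PipeOps) is not shown in the reprex above. Below is a minimal sketch of what it could look like, reusing task.mlr and the same dropout/alpha ranges and evaluation budget; the variable names deephit.at and deephit.graph are illustrative:

# sketch: the AutoTuner is built around the plain learner, so its search space
# keeps the original parameter names; the pipeline is wrapped around it afterwards
deephit.at <- AutoTuner$new(
   learner = lrn("surv.deephit", optimizer = "adam", epochs = 50),
   resampling = rsmp("holdout"),
   measure = msr("surv.cindex"),
   search_space = ps(dropout = p_dbl(lower = 0, upper = 1),
                     alpha = p_dbl(lower = 0, upper = 1)),
   terminator = trm("evals", n_evals = 2),
   tuner = tnr("random_search")
)

deephit.graph <- GraphLearner$new(po("encode") %>>% po("scale") %>>% po("learner", deephit.at))

# training the graph learner runs the inner tuning automatically on the encoded, scaled data
deephit.graph$train(task.mlr)

Because the tuning lives inside the learner, deephit.graph can also be passed directly to resample() for nested resampling.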
Thanks, it worked.