
CNTK Python API - continuing classifier training


That is not actually the question here... the two are related, but they are not the same.

I trained a model for 1500 epochs and reached an average loss of around 67%. I then wanted to continue training, with the code below:

def Create_Trainer(train_reader, minibatch_size, epoch_size,
                   checkpoint_path=None, distributed_after=INFINITE_SAMPLES):
    # Create model with params
    lr_per_minibatch = learning_rate_schedule(
        [0.01] * 10 + [0.003] * 10 + [0.001], UnitType.minibatch, epoch_size)
    momentum_time_constant = momentum_as_time_constant_schedule(
        -minibatch_size / np.log(0.9))
    l2_reg_weight = 0.0001
    input_var = input_variable((num_channels, image_height, image_width))
    label_var = input_variable((num_classes))
    feature_scale = 1.0 / 256.0
    input_var_norm = element_times(feature_scale, input_var)
    z = create_model(input_var_norm, num_classes)

    # Create error functions
    if checkpoint_path:
        print('Loaded Checkpoint!')
        z.load_model(checkpoint_path)
    ce = cross_entropy_with_softmax(z, label_var)
    pe = classification_error(z, label_var)

    # Create learner
    learner = momentum_sgd(z.parameters,
                           lr=lr_per_minibatch, momentum=momentum_time_constant,
                           l2_regularization_weight=l2_reg_weight)
    if distributed_after != INFINITE_SAMPLES:
        learner = distributed.data_parallel_distributed_learner(
            learner=learner,
            num_quantization_bits=1,
            distributed_after=distributed_after)

    input_map = {
        input_var: train_reader.streams.features,
        label_var: train_reader.streams.labels
    }
    return Trainer(z, ce, pe, learner), input_map
Note the line if checkpoint_path: roughly in the middle of the function.

I load the .dnn file from a previous training run; it was saved by this code:

if current_epoch % checkpoint_frequency == 0:
    trainer.save_checkpoint(os.path.join(checkpoint_path + "_{}.dnn".format(current_epoch)))
This actually produces both a .dnn and a .dnn.ckp file. Obviously, I only load the .dnn file with load_model.

When I restart training and load the model, it seems to load the network architecture but perhaps not the weights? What is the correct way to do this?


Thanks!

You need to use trainer.restore_from_checkpoint; this will recreate the trainer and the learners as well.

Soon, training sessions will be introduced; they will allow seamless restore in an easy manner, taking care of the trainer/minibatch/distributed state.

One important point: in your Python script, the network structure and the order in which the nodes are created must be identical when creating the checkpoint and when restoring from it.