
Python: using LearningRateScheduler with Keras and the SGD optimizer. How do I fix this error?

Tags: python, tensorflow, optimization, keras, deep-learning

I want to reduce the learning rate after every epoch. I am using Keras, and I get the following error when I run my code:


Traceback (most recent call last):

  File "<ipython-input-1-2983b4be581f>", line 1, in <module>
    runfile('C:/Users/Gehan Mohamed/cnn_learningratescheduler.py', wdir='C:/Users/Gehan Mohamed')

  File "C:\Users\Gehan Mohamed\Anaconda3\lib\site-packages\tensorflow_core\python\framework\constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)

ValueError: Attempt to convert a value (<keras.callbacks.callbacks.LearningRateScheduler object at 0x000001E7C7B8E780>) with an unsupported type (<class 'keras.callbacks.callbacks.LearningRateScheduler'>) to a Tensor.
How can I fix this error? This is my code:

def step_decay(epochs):
    if epochs < 50:
        lrate = 0.1
        return lrate
    else:  # use else so epoch 50 itself is covered; two separate ifs returned None there
        lrate = 0.01
        return lrate

lrate = LearningRateScheduler(step_decay)
sgd = SGD(lr=lrate, decay=0, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
callbacks_list = [lrate,callback]
filesPath=getFilesPathWithoutSeizure(i, indexPat)
history=model.fit_generator(generate_arrays_for_training(indexPat, filesPath, end=75), 
                                validation_data=generate_arrays_for_training(indexPat, filesPath, start=75),
                                steps_per_epoch=int((len(filesPath)-int(len(filesPath)/100*25))), 
                                validation_steps=int((len(filesPath)-int(len(filesPath)/100*75))),
                                verbose=2,
                                epochs=300, max_queue_size=2, shuffle=True, callbacks=callbacks_list)

In this part of the code:

lrate = LearningRateScheduler(step_decay)
sgd = SGD(lr=lrate, decay=0, momentum=0.9, nesterov=True)
you are setting the learning rate of SGD to a callback object, which is incorrect; you should give SGD a numeric initial learning rate instead:

sgd = SGD(lr=0.01, decay=0, momentum=0.9, nesterov=True)

and then pass the callback list to model.fit. The confusion probably comes from your earlier variable, which you also named lrate.
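
Putting it together, a minimal corrected sketch (assuming the standalone Keras API used in the question, and that model and the data generators are defined as in your original code):

from keras.optimizers import SGD
from keras.callbacks import LearningRateScheduler

def step_decay(epoch):
    # 0.1 for the first 50 epochs, 0.01 afterwards
    return 0.1 if epoch < 50 else 0.01

lr_scheduler = LearningRateScheduler(step_decay, verbose=1)  # verbose=1 prints the lr at each epoch
sgd = SGD(lr=0.01, decay=0, momentum=0.9, nesterov=True)     # numeric initial lr, not a callback
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
callbacks_list = [lr_scheduler]

Note that the scheduler's return value overrides the optimizer's learning rate at the start of every epoch, so the initial lr passed to SGD only matters until the first schedule call.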

You can also reduce the learning rate after each epoch by a custom amount, for example exponentially, as shown below:

import tensorflow as tf

def scheduler(epoch, lr):
    # keep the initial rate for the first epoch, then decay it exponentially
    if epoch < 1:
        return lr
    else:
        return lr * tf.math.exp(-0.1)
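
The function itself is not a callback yet; it first has to be wrapped in a LearningRateScheduler (a one-line sketch, assuming tf.keras, which produces the callback variable used in the fit call below):

callback = tf.keras.callbacks.LearningRateScheduler(scheduler)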
Now let's pass it to the fit method:

history = model.fit(trainGen, validation_data=valGen, validation_steps=val_split//batch_size, epochs=200, steps_per_epoch=train_split//batch_size, callbacks=[callback])

As shown above, you simply pass the initialized scheduler callback to the fit method and run it. You will notice that after every epoch the learning rate decreases according to the rule you set in the scheduler function.
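
If you want to confirm that the rate really changes from epoch to epoch (the point disputed in the comments below), here is a small sketch of a custom callback, assuming tf.keras, that reads the optimizer's learning rate back at the end of every epoch:

import tensorflow as tf

class LrLogger(tf.keras.callbacks.Callback):
    """Hypothetical helper: print the optimizer's current learning rate every epoch."""
    def on_epoch_end(self, epoch, logs=None):
        lr = tf.keras.backend.get_value(self.model.optimizer.lr)
        print(f"epoch {epoch + 1}: learning rate = {lr:.6f}")

Add it to the callbacks list, e.g. callbacks=[callback, LrLogger()], and the printed values should follow your schedule.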

Comments:

- When you get an error in your code, please post the complete error traceback. Edit your original post to include it rather than leaving it as a comment, since comments have almost no formatting and are hard to read.
- I want to reduce the learning rate every epoch. How can I do that with the SGD optimizer? I cannot set an initial learning rate on SGD, because it gets updated every epoch.
- @gigi No, that idea is incorrect: you set the initial learning rate, and the LearningRateScheduler callback then sets the learning rate across epochs.
- It does not work; the learning rate is not updated at each epoch. I set the initial learning rate in SGD: sgd = SGD(lr=0.1, decay=0, momentum=0.9, nesterov=True).
- @gigi How exactly did you check that it does not work? I use this callback frequently and I know it works fine.
- Only the initial learning rate is used; it does not update the learning rate at each epoch.