How to disable the progress bar in PyTorch Lightning
I am having a lot of issues with the TQDM progress bar in PyTorch Lightning:
- When I run training in a terminal, the progress bars overwrite themselves. At the end of a training epoch, the validation progress bar is printed under the training bar, but when it finishes, the progress bar of the next training epoch is printed over the progress bar of the previous epoch. As a result, it is impossible to see the losses of previous epochs.
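The overwriting behaviour comes from how tqdm-style bars redraw in place: each refresh writes a carriage return (`\r`) and redraws the whole line, so any other bar that ends up on the same terminal line gets clobbered. A minimal stdlib sketch of that mechanism (the `draw_bar` helper is hypothetical, not part of tqdm):

```python
import io

def draw_bar(stream, fraction, width=20):
    # tqdm-style in-place update: '\r' moves the cursor back to the start
    # of the line, so each draw overwrites the previous one
    filled = int(fraction * width)
    stream.write("\r[" + "#" * filled + " " * (width - filled) + f"] {fraction:4.0%}")

buf = io.StringIO()
for i in range(1, 5):
    draw_bar(buf, i / 4)

# On a real terminal only the characters after the last '\r' remain visible
print(buf.getvalue().rsplit("\r", 1)[-1])  # → [####################] 100%
```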
INFO:root: Name Type Params
0 l1 Linear 7 K
Epoch 2: 56%|████████████▊ | 2093/3750 [00:05

Use show_progress_bar=False in the Trainer. F.Y.I. show_progress_bar=False has been deprecated since v0.7.2, but you can use progress_bar_refresh_rate=0 instead.
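Assuming a pytorch_lightning version in the 0.7.2–1.x range, where the progress_bar_refresh_rate Trainer argument still exists, the suggestion above amounts to:

```python
from pytorch_lightning import Trainer

# progress_bar_refresh_rate=0 suppresses drawing of all progress bars;
# regular logging (e.g. the INFO lines above) is unaffected.
trainer = Trainer(progress_bar_refresh_rate=0)
```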
I would like to know whether these issues can be fixed, or alternatively how to disable the progress bars and just print some log details to the screen instead.
As far as I know, this problem has not been solved. The PL team has said it is "a tqdm-related thing" and there is nothing they can do about it. You may want to take a look at the related issue.
My temporary workaround is:
from tqdm import tqdm
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ProgressBar

class LitProgressBar(ProgressBar):
    def init_validation_tqdm(self):
        # Return a disabled tqdm instance so the validation bar is never drawn
        bar = tqdm(disable=True)
        return bar

bar = LitProgressBar()
trainer = Trainer(callbacks=[bar])
This approach disables only the validation progress bar and lets you keep the training bar working correctly. Note that progress_bar_refresh_rate=0 disables updates of all progress bars. Recently, I noticed that this option is ignored when a custom progress bar is passed to the Trainer's callbacks. So, if you want to change the refresh rate of the progress bar, use bar = LitProgressBar(refresh_rate=your_refresh_rate) instead of bar = LitProgressBar(). As of 1.0.5 this no longer works (see the answer below).
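The refresh-rate semantics described above can be illustrated with a small stdlib stand-in (ThrottledBar is a hypothetical class, not the pytorch_lightning ProgressBar API): the display is redrawn only every refresh_rate updates, and refresh_rate=0 never draws at all:

```python
import io

class ThrottledBar:
    """Hypothetical sketch of tqdm-style refresh throttling: redraw only
    every `refresh_rate` updates; refresh_rate=0 disables drawing."""

    def __init__(self, total, refresh_rate=1):
        self.total = total
        self.refresh_rate = refresh_rate
        self.stream = io.StringIO()
        self.n = 0
        self.draws = 0

    def update(self, step=1):
        self.n += step
        # refresh_rate=0 short-circuits, so the bar is never redrawn
        if self.refresh_rate and self.n % self.refresh_rate == 0:
            self.draws += 1
            self.stream.write(f"\r{self.n}/{self.total}")

bar = ThrottledBar(total=100, refresh_rate=20)
for _ in range(100):
    bar.update()
print(bar.draws)  # → 5 (redrawn at 20, 40, 60, 80, 100)

silent = ThrottledBar(total=100, refresh_rate=0)
for _ in range(100):
    silent.update()
print(silent.draws)  # → 0 (never drawn)
```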
INFO:root: Name Type Params
0 l1 Linear 7 K
Epoch 1: 50%|█████ | 1875/3750 [00:05<00:05, 322.34batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 50%|█████ | 1879/3750 [00:05<00:05, 319.41batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 52%|█████▏ | 1942/3750 [00:05<00:04, 374.05batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 53%|█████▎ | 2005/3750 [00:05<00:04, 425.01batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 55%|█████▌ | 2068/3750 [00:05<00:03, 470.56batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 57%|█████▋ | 2131/3750 [00:05<00:03, 507.69batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 59%|█████▊ | 2194/3750 [00:06<00:02, 538.19batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 60%|██████ | 2257/3750 [00:06<00:02, 561.20batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 62%|██████▏ | 2320/3750 [00:06<00:02, 579.22batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 64%|██████▎ | 2383/3750 [00:06<00:02, 591.58batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 65%|██████▌ | 2445/3750 [00:06<00:02, 599.77batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 67%|██████▋ | 2507/3750 [00:06<00:02, 605.00batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 69%|██████▊ | 2569/3750 [00:06<00:01, 607.04batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]
Epoch 1: 70%|███████ | 2633/3750 [00:06<00:01, 613.98batch/s, batch_nb=1874, loss=1.534, training_loss=1.72, v_nb=49]