Deep learning: have I implemented the learning rate finder correctly?


An implementation and usage of the learning rate finder based on the paper (linked in the docstring below).

Without the learning rate finder:

from __future__ import print_function, with_statement, division

import os

import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.optim.lr_scheduler import _LRScheduler
from tqdm.autonotebook import tqdm


class LRFinder(object):
    """Learning rate range test.

    The learning rate range test increases the learning rate in a pre-training run
    between two boundaries in a linear or exponential manner. It provides valuable
    information on how well the network can be trained over a range of learning rates
    and what is the optimal learning rate.

    Arguments:
        model (torch.nn.Module): wrapped model.
        optimizer (torch.optim.Optimizer): wrapped optimizer where the defined learning
            rate is assumed to be the lower boundary of the range test.
        criterion (torch.nn.Module): wrapped loss function.
        device (str or torch.device, optional): a string ("cpu" or "cuda") with an
            optional ordinal for the device type (e.g. "cuda:X", where X is the ordinal).
            Alternatively, can be an object representing the device on which the
            computation will take place. Default: None, uses the same device as `model`.

    Example:
        >>> lr_finder = LRFinder(net, optimizer, criterion, device="cuda")
        >>> lr_finder.range_test(dataloader, end_lr=100, num_iter=100)

    Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186
    fastai/lr_find: https://github.com/fastai/fastai

    """

    def __init__(self, model, optimizer, criterion, device=None):
        self.model = model
        self.optimizer = optimizer
        self.criterion = criterion
        self.history = {"lr": [], "loss": []}
        self.best_loss = None

        # Save the original state of the model and optimizer so they can be restored if
        # needed
        self.model_state = model.state_dict()
        self.model_device = next(self.model.parameters()).device
        self.optimizer_state = optimizer.state_dict()

        # If device is None, use the same as the model
        if device:
            self.device = device
        else:
            self.device = self.model_device

    def reset(self):
        """Restores the model and optimizer to their initial states."""
        self.model.load_state_dict(self.model_state)
        self.model.to(self.model_device)
        self.optimizer.load_state_dict(self.optimizer_state)

    def range_test(
        self,
        train_loader,
        val_loader=None,
        end_lr=10,
        num_iter=100,
        step_mode="exp",
        smooth_f=0.05,
        diverge_th=5,
    ):
        """Performs the learning rate range test.

        Arguments:
            train_loader (torch.utils.data.DataLoader): the training set data loader.
            val_loader (torch.utils.data.DataLoader, optional): if `None` the range test
                will only use the training loss. When given a data loader, the model is
                evaluated after each iteration on that dataset and the evaluation loss
                is used. Note that in this mode the test takes significantly longer but
                generally produces more precise results. Default: None.
            end_lr (float, optional): the maximum learning rate to test. Default: 10.
            num_iter (int, optional): the number of iterations over which the test
                occurs. Default: 100.
            step_mode (str, optional): one of the available learning rate policies,
                linear or exponential ("linear", "exp"). Default: "exp".
            smooth_f (float, optional): the loss smoothing factor within the [0, 1[
                interval. Disabled if set to 0, otherwise the loss is smoothed using
                exponential smoothing. Default: 0.05.
            diverge_th (int, optional): the test is stopped when the loss surpasses the
                threshold:  diverge_th * best_loss. Default: 5.

        """
        # Reset test results
        self.history = {"lr": [], "loss": []}
        self.best_loss = None

        # Move the model to the proper device
        self.model.to(self.device)

        # Initialize the proper learning rate policy
        if step_mode.lower() == "exp":
            lr_schedule = ExponentialLR(self.optimizer, end_lr, num_iter)
        elif step_mode.lower() == "linear":
            lr_schedule = LinearLR(self.optimizer, end_lr, num_iter)
        else:
            raise ValueError("expected one of (exp, linear), got {}".format(step_mode))

        if smooth_f < 0 or smooth_f >= 1:
            raise ValueError("smooth_f is outside the range [0, 1[")

        # Create an iterator to get data batch by batch
        iterator = iter(train_loader)
        for iteration in tqdm(range(num_iter)):
            # Get a new set of inputs and labels
            try:
                inputs, labels = next(iterator)
            except StopIteration:
                iterator = iter(train_loader)
                inputs, labels = next(iterator)

            # Train on batch and retrieve loss
            loss = self._train_batch(inputs, labels)
            if val_loader:
                loss = self._validate(val_loader)

            # Update the learning rate
            lr_schedule.step()
            self.history["lr"].append(lr_schedule.get_lr()[0])

            # Track the best loss and smooth it if smooth_f is specified
            if iteration == 0:
                self.best_loss = loss
            else:
                if smooth_f > 0:
                    loss = smooth_f * loss + (1 - smooth_f) * self.history["loss"][-1]
                if loss < self.best_loss:
                    self.best_loss = loss

            # Check if the loss has diverged; if it has, stop the test
            self.history["loss"].append(loss)
            if loss > diverge_th * self.best_loss:
                print("Stopping early, the loss has diverged")
                break

        print("Learning rate search finished. See the graph with {finder_name}.plot()")

    def _train_batch(self, inputs, labels):
        # Set model to training mode (needed because _validate switches it to eval mode)
        self.model.train()

        # Move data to the correct device
        inputs = inputs.to(self.device)
        labels = labels.to(self.device)

        # Forward pass
        self.optimizer.zero_grad()
        outputs = self.model(inputs)
        loss = self.criterion(outputs, labels)

        # Backward pass
        loss.backward()
        self.optimizer.step()

        return loss.item()

    def _validate(self, dataloader):
        # Set model to evaluation mode and disable gradient computation
        running_loss = 0
        self.model.eval()
        with torch.no_grad():
            for inputs, labels in dataloader:
                # Move data to the correct device
                inputs = inputs.to(self.device)
                labels = labels.to(self.device)

                # Forward pass and loss computation
                outputs = self.model(inputs)
                loss = self.criterion(outputs, labels)
                running_loss += loss.item() * inputs.size(0)

        return running_loss / len(dataloader.dataset)

    def plot(self, skip_start=10, skip_end=5, log_lr=True):
        """Plots the learning rate range test.

        Arguments:
            skip_start (int, optional): number of batches to trim from the start.
                Default: 10.
            skip_end (int, optional): number of batches to trim from the end.
                Default: 5.
            log_lr (bool, optional): True to plot the learning rate in a logarithmic
                scale; otherwise, plotted in a linear scale. Default: True.

        """

        if skip_start < 0:
            raise ValueError("skip_start cannot be negative")
        if skip_end < 0:
            raise ValueError("skip_end cannot be negative")

        # Get the data to plot from the history dictionary. Also, handle skip_end=0
        # properly so the behaviour is as expected
        lrs = self.history["lr"]
        losses = self.history["loss"]
        if skip_end == 0:
            lrs = lrs[skip_start:]
            losses = losses[skip_start:]
        else:
            lrs = lrs[skip_start:-skip_end]
            losses = losses[skip_start:-skip_end]

        # Plot loss as a function of the learning rate
        plt.plot(lrs, losses)
        if log_lr:
            plt.xscale("log")
        plt.xlabel("Learning rate")
        plt.ylabel("Loss")
        plt.show()


class LinearLR(_LRScheduler):
    """Linearly increases the learning rate between two boundaries over a number of
    iterations.

    Arguments:
        optimizer (torch.optim.Optimizer): wrapped optimizer.
        end_lr (float, optional): the final learning rate, i.e. the upper
            boundary of the test. Default: 10.
        num_iter (int, optional): the number of iterations over which the test
            occurs. Default: 100.
        last_epoch (int): the index of last epoch. Default: -1.

    """

    def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
        self.end_lr = end_lr
        self.num_iter = num_iter
        super(LinearLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        curr_iter = self.last_epoch + 1
        r = curr_iter / self.num_iter
        return [base_lr + r * (self.end_lr - base_lr) for base_lr in self.base_lrs]


class ExponentialLR(_LRScheduler):
    """Exponentially increases the learning rate between two boundaries over a number of
    iterations.

    Arguments:
        optimizer (torch.optim.Optimizer): wrapped optimizer.
        end_lr (float, optional): the final learning rate, i.e. the upper
            boundary of the test. Default: 10.
        num_iter (int, optional): the number of iterations over which the test
            occurs. Default: 100.
        last_epoch (int): the index of last epoch. Default: -1.

    """

    def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
        self.end_lr = end_lr
        self.num_iter = num_iter
        super(ExponentialLR, self).__init__(optimizer, last_epoch)

    def get_lr(self):
        curr_iter = self.last_epoch + 1
        r = curr_iter / self.num_iter
        return [base_lr * (self.end_lr / base_lr) ** r for base_lr in self.base_lrs]
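
# Sanity check for the ExponentialLR policy above (illustrative numbers):
# with base_lr = 1e-4, end_lr = 10 and num_iter = 100, get_lr() returns
#     base_lr * (end_lr / base_lr) ** (curr_iter / num_iter)
# so the tested learning rate sweeps from ~1e-4 up to 10, evenly spaced on
# a log scale -- which is why plot() defaults to log_lr=True.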

trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])

root = './data'
if not os.path.exists(root):
    os.mkdir(root)
train_set = dset.MNIST(root=root, train=True, transform=trans, download=True)
test_set = dset.MNIST(root=root, train=False, transform=trans, download=True)

batch_size = 64

train_loader = torch.utils.data.DataLoader(
                 dataset=train_set,
                 batch_size=batch_size,
                 shuffle=True)

test_loader = torch.utils.data.DataLoader(
                dataset=test_set,
                batch_size=batch_size,
                shuffle=True)

class NeuralNet(nn.Module):
    def __init__(self):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(28*28, 500)
        self.fc2 = nn.Linear(500, 256)
        self.fc3 = nn.Linear(256, 10)
    def forward(self, x):
        x = x.view(-1, 28*28)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

num_epochs = 2
random_sample_size = 200


# Hyper-parameters 
input_size = 100
hidden_size = 100
num_classes = 10
learning_rate = .0001
# Device configuration
device = 'cpu'

model = NeuralNet().to(device)

# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  
# lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
# lr_finder.range_test(train_loader, end_lr=100, num_iter=100)
# lr_finder.plot()
# optimizer = torch.optim.Adam(model.parameters(), lr=lr_finder.history['lr'][0])  
# print(lr_finder.history['lr'])

predicted_test = []
labels_l = []
actual_values = []
predicted_values = []

N = len(train_loader)
# Train the model
total_step = len(train_loader)

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):  
        # Move tensors to the configured device
#         images = images.reshape(-1, 50176).to(device)
        images = images.to(device)
        labels = labels.to(device)

        # Forward pass
        outputs = model(images)
        predicted = outputs.data.max(1)[1]
        predicted_test.append(predicted.cpu().numpy())
        labels_l.append(labels.cpu().numpy())

        loss = criterion(outputs, labels)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    predicted_values.append(np.concatenate(predicted_test).ravel())
    actual_values.append(np.concatenate(labels_l).ravel())

    print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
    print('training accuracy : ', 100 * len((np.where(np.array(predicted_values[0])==(np.array(actual_values[0])))[0])) / len(actual_values[0]))
With the previously commented-out learning rate finder code now uncommented:

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)  
lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
lr_finder.range_test(train_loader, end_lr=100, num_iter=100)
lr_finder.plot()
optimizer = torch.optim.Adam(model.parameters(), lr=lr_finder.history['lr'][0])  
print(lr_finder.history['lr'])
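
One thing worth noting about this usage: `range_test` trains the model while sweeping the learning rate, so the weights it leaves behind are not the initial ones; the class provides `reset()` to restore them before the real training run. Also, `lr_finder.history['lr'][0]` is simply the first (and smallest) rate tested. Below is a minimal sketch of an alternative selection, assuming the recorded loss curve has a clear minimum (the `best_lr` name and the divide-by-10 rule of thumb are illustrative, not part of the paper):

lr_finder.reset()  # restore the model/optimizer state saved in __init__

# Pick the lr at the lowest recorded loss, then back off by an order of
# magnitude -- a common heuristic, not a prescription from the paper.
losses = lr_finder.history["loss"]
lrs = lr_finder.history["lr"]
best_lr = lrs[losses.index(min(losses))] / 10
optimizer = torch.optim.Adam(model.parameters(), lr=best_lr)
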
The model achieves these results after two epochs:

Epoch [1/2], Step [938/938], Loss: 3.7311
training accuracy :  9.93
Epoch [2/2], Step [938/938], Loss: 3.5106
training accuracy :  9.93

As can be seen, the training accuracy of 9.93 is much lower than the 84.09833333 obtained without the learning rate finder. Shouldn't the learning rate finder find a learning rate that lets the model reach a higher training set accuracy?

The code seems to use the implementation correctly. To answer your last question:

    As can be seen, the training accuracy of 9.93 is much lower than 84.09833333. Shouldn't the learning rate finder find a learning rate that achieves a higher training set accuracy?

Not really. A few points:

  • You are using Adam, which scales the learning rate adaptively for each parameter in the network. The initial learning rate therefore matters less than it does with plain SGD, for example. The original authors of Adam write:

    The hyper-parameters have intuitive interpretations and typically require little tuning. [1]

    (A one-step sketch of the Adam update follows after this list.)

  • A good learning rate should make your network converge faster (i.e. in fewer epochs). It may still find the same local minimum as a higher rate would, just faster. The risk with too high a learning rate is that you overshoot the local minima and land in a poor one instead. With a very low learning rate you should eventually get the best training accuracy, but it will take a very long time.

  • You are training the model for only two epochs. If I had to guess, the algorithm has found that a small learning rate leads to a good optimum, but being small it also needs more time to converge. To test this theory, I would recommend training for longer.

  • In conclusion, you are probably better off using Adam with its default parameters and focusing your attention elsewhere, such as modelling choices (layers, nodes, activations, etc.). In my experience, standard Adam works remarkably well in most cases.
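
To make the first point concrete, here is a minimal sketch of a single hand-rolled Adam update, following the update rule in [1]; the learning rate, betas and toy gradients are illustrative values, not taken from the question:

import torch

lr, beta1, beta2, eps = 1e-3, 0.9, 0.999, 1e-8

grad = torch.tensor([0.01, 1.0, 100.0])  # gradients of very different magnitude
m = torch.zeros_like(grad)  # first moment: running mean of gradients
v = torch.zeros_like(grad)  # second moment: running mean of squared gradients

t = 1  # step counter, used for bias correction
m = beta1 * m + (1 - beta1) * grad
v = beta2 * v + (1 - beta2) * grad ** 2
m_hat = m / (1 - beta1 ** t)
v_hat = v / (1 - beta2 ** t)
step = lr * m_hat / (v_hat.sqrt() + eps)

print(step)  # tensor([0.0010, 0.0010, 0.0010])

Even though the three gradients span four orders of magnitude, each parameter moves by roughly `lr`, because every gradient is rescaled by its own second-moment estimate. That is why the initial learning rate matters less with Adam than with plain SGD.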


    [1] Kingma, D. P. & Ba, J. (2014). Adam: A Method for Stochastic Optimization. https://arxiv.org/abs/1412.6980
