PyTorch iterative pruning

Following the tutorial, I have been trying to implement pruning using `torch.nn.utils.prune`. For experimentation, I am using the LeNet-300-100 dense neural network with the MNIST dataset.
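
For context, here is a minimal sketch of that setup. The layer sizes are the standard LeNet-300-100 ones and the pruning call follows the `torch.nn.utils.prune` API; the class and variable names are illustrative, not the actual code:

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # LeNet-300-100: two dense hidden layers (300 and 100 units)
    # over flattened 28x28 MNIST images
    class LeNet300(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(28 * 28, 300)
            self.fc2 = nn.Linear(300, 100)
            self.fc3 = nn.Linear(100, 10)

        def forward(self, x):
            x = torch.relu(self.fc1(x.view(x.size(0), -1)))
            x = torch.relu(self.fc2(x))
            return self.fc3(x)

    model = LeNet300()

    # one pruning step: zero out the 20% smallest-magnitude weights per layer
    for module in (model.fc1, model.fc2, model.fc3):
        prune.l1_unstructured(module, name="weight", amount=0.2)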

Then, I train the model using the following code snippet:

# Training loop-
for epoch in range(num_epochs):
    running_loss = 0.0
    running_corrects = 0.0
    
    if loc_patience >= patience:
        print("\n'EarlyStopping' called!\n")
        break

    running_loss, running_corrects = train_model(best_model, train_loader)
  
    epoch_loss = running_loss / len(train_dataset)
    epoch_acc = running_corrects.double() / len(train_dataset)
    # epoch_acc = 100 * running_corrects / len(trainset)
    # print(f"\nepoch: {epoch + 1} training loss = {epoch_loss:.4f}, training accuracy = {epoch_acc * 100:.2f}%\n")

    running_loss_val, correct, total = test_model(best_model, test_loader)

    epoch_val_loss = running_loss_val / len(test_dataset)
    val_acc = 100 * (correct / total)
    # print(f"\nepoch: {epoch + 1} training loss = {epoch_loss:.4f}, training accuracy = {epoch_acc * 100:.2f}%, val_loss = {epoch_val_loss:.4f} & val_accuracy = {val_acc:.2f}%\n")

    print(f"\nepoch: {epoch + 1} training loss = {epoch_loss:.4f}, training accuracy = {epoch_acc * 100:.2f}%, val_loss = {epoch_val_loss:.4f} & val_accuracy = {val_acc:.2f}%")

    curr_params = count_params(best_model)
    print(f"Number of parameters = {curr_params}\n")
    
    percentage_pruned = ((orig_params - curr_params.numpy()) / orig_params * 100).numpy()
    
    # Code for manual Early Stopping:
    # if np.abs(epoch_val_loss < best_val_loss) >= minimum_delta:
    if (epoch_val_loss < best_val_loss) and np.abs(epoch_val_loss - best_val_loss) >= minimum_delta:
        # print(f"epoch_val_loss = {epoch_val_loss:.4f}, best_val_loss = {best_val_loss:.4f}")
        
        # update 'best_val_loss' variable to lowest loss encountered so far-
        best_val_loss = epoch_val_loss
        
        # reset 'loc_patience' variable-
        loc_patience = 0
        
        print(f"\nSaving model with lowest val_loss = {epoch_val_loss:.4f}")
        
        # Save trained model with validation accuracy-
        # torch.save(model.state_dict, f"LeNet-300-100_Trained_{val_acc}.pth")
        torch.save(best_model.state_dict(), f"LeNet-300-100_{percentage_pruned:.2f}.pth")
        
    else:  # there is no improvement in monitored metric 'val_loss'
        loc_patience += 1  # number of epochs without any improvement
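
Here, `train_model`, `test_model` and `count_params` are helper functions that are not shown. For the reported parameter count to reflect pruning at all, `count_params` would have to count only the surviving (nonzero) weights; a minimal sketch of such a helper, assuming that is its intent:

    import torch
    import torch.nn as nn

    def count_params(model):
        # count nonzero entries of the *effective* weights: after applying
        # torch.nn.utils.prune, the masked tensor is `module.weight`, while
        # model.parameters() still contains the dense `weight_orig`
        total = 0
        for module in model.modules():
            if isinstance(module, nn.Linear):
                total += int(torch.count_nonzero(module.weight))
                total += int(torch.count_nonzero(module.bias))
        return total

With a helper like this returning a plain Python int, `percentage_pruned` could be computed as `(orig_params - curr_params) / orig_params * 100` without any `.numpy()` calls.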
The output is:

epoch: 1 training loss = 0.0266, training accuracy = 99.11%, val_loss = 0.0980 & val_accuracy = 97.94%
Number of parameters = 266610

Saving model with lowest val_loss = 0.0980

epoch: 2 training loss = 0.0266, training accuracy = 99.11%, val_loss = 0.0980 & val_accuracy = 97.94%
Number of parameters = 266610

epoch: 3 training loss = 0.0266, training accuracy = 99.11%, val_loss = 0.0980 & val_accuracy = 97.94%
Number of parameters = 266610

epoch: 4 training loss = 0.0266, training accuracy = 99.11%, val_loss = 0.0980 & val_accuracy = 97.94%
Number of parameters = 266610

epoch: 5 training loss = 0.0266, training accuracy = 99.11%, val_loss = 0.0980 & val_accuracy = 97.94%
Number of parameters = 266610

epoch: 6 training loss = 0.0266, training accuracy = 99.11%, val_loss = 0.0980 & val_accuracy = 97.94%
Number of parameters = 266610

'EarlyStopping' called!

This shows that:

  • The model is "frozen": it is not learning anything, since the val_loss and val_accuracy values stay the same across epochs

  • The number of parameters stays the same, so the pruning I defined seems to have no effect (see the sketch after this list)

  • How can I fix these issues?
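
A note on the second point: `torch.nn.utils.prune` masks parameters rather than deleting them. After `prune.l1_unstructured(module, name="weight", amount=...)`, the module holds the dense `weight_orig` parameter plus a `weight_mask` buffer, and `module.weight` is recomputed as their elementwise product, so a raw parameter count stays at 266610 by design. A sketch of how the achieved sparsity can be inspected instead (the `sparsity` helper name is illustrative):

    import torch

    def sparsity(module):
        # percentage of weights that the pruning mask has zeroed out
        zeros = float(torch.sum(module.weight == 0))
        return 100.0 * zeros / module.weight.nelement()

    # prune.remove(module, "weight") would make the pruning permanent by
    # folding the mask into the weight and dropping weight_orig / weight_mask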
