Calculating the entropy of each class on the test set to measure uncertainty in PyTorch


I am trying to calculate the entropy of each class of the dataset in an image classification task, to measure uncertainty in PyTorch, using the MC Dropout method and the solution proposed in this link.

First, I compute the mean of each class per batch across the different forward passes (classes_mean_batch), then the mean over the whole testloader (classes_mean), and finally apply some transformations to obtain (total_mean), which I use to compute the entropy, as shown in the code below.
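As a toy illustration of the entropy formula applied at the end of the pipeline described above (the numbers here are made up for the example, not taken from the original code):

```python
import sys
import numpy as np

# a made-up mean-probability vector for one class across 4 batches
total_mean = np.array([[0.7, 0.1, 0.1, 0.1]])

# epsilon avoids log(0)
epsilon = sys.float_info.min
entropy = -np.sum(total_mean * np.log(total_mean + epsilon), axis=-1)
print(entropy)  # one entropy value per class (here, one row)
```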

import sys
import numpy as np
import torch
import torch.nn as nn

def mcdropout_test(batch_size, n_classes, model, T):
    # set non-dropout layers (batch norm, etc.) to eval mode
    model.eval()

    # set dropout layers back to train mode so they keep sampling masks
    enable_dropout(model)

    softmax = nn.Softmax(dim=1)
    classes_mean = []

    for images, labels in testloader:
        images = images.to(device)
        labels = labels.to(device)
        classes_mean_batch = []

        with torch.no_grad():
            output_list = []
            # collect softmax outputs for T stochastic forward passes
            for i in range(T):
                output = softmax(model(images))
                output_list.append(torch.unsqueeze(output, 0))

        # shape: (T, batch_size, n_classes)
        concat_output = torch.cat(output_list, 0)

        # mean of each class for this batch across the T MCD forward passes
        for i in range(n_classes):
            mean = torch.mean(concat_output[:, :, i])
            classes_mean_batch.append(mean)

        # collect the per-batch class means over the whole testloader
        classes_mean.append(torch.stack(classes_mean_batch))

    # shape: (num_batches, n_classes)
    concat_classes_mean = torch.stack(classes_mean)

    total_mean = []
    for i in range(n_classes):
        total_mean.append(concat_classes_mean[:, i])

    # shape: (n_classes, num_batches)
    total_mean = torch.stack(total_mean)
    total_mean = np.asarray(total_mean.cpu())

    epsilon = sys.float_info.min
    # calculating the entropy of each class across the MCD forward passes
    entropy = (- np.sum(total_mean * np.log(total_mean + epsilon), axis=-1)).tolist()
    for i in range(n_classes):
        print(f'The uncertainty of class {i+1} is {entropy[i]:.4f}')
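The code calls an enable_dropout helper that is not shown. A minimal sketch of what such a helper is assumed to do (switching only the nn.Dropout modules back to train mode while the rest of the model stays in eval mode) could look like this:

```python
import torch.nn as nn

def enable_dropout(model):
    """Set every dropout layer to train mode so it keeps sampling masks."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()
```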
    
    
Can anyone correct or confirm my implementation for calculating the entropy of each class?