
Image processing: I get this error when classifying images with 10 classes in PyTorch using ResNet50. My code is:

Tags: image-processing, deep-learning, computer-vision, pytorch, resnet

This is the code I am implementing: I use a subset of the CalTech256 dataset to classify images of 10 different animals. It covers dataset preparation, data augmentation, and then the steps to build the classifier.

def train_and_validate(model, loss_criterion, optimizer, epochs=25):
    '''
    Function to train and validate
    Parameters
        :param model: Model to train and validate
        :param loss_criterion: Loss Criterion to minimize
        :param optimizer: Optimizer for computing gradients
        :param epochs: Number of epochs (default=25)

    Returns
        model: Trained Model with best validation accuracy
        history: (dict object): Having training loss, accuracy and validation loss, accuracy
    '''

    start = time.time()
    history = []
    best_acc = 0.0

    for epoch in range(epochs):
        epoch_start = time.time()
        print("Epoch: {}/{}".format(epoch+1, epochs))

        # Set to training mode
        model.train()

        # Loss and Accuracy within the epoch
        train_loss = 0.0
        train_acc = 0.0

        valid_loss = 0.0
        valid_acc = 0.0

        for i, (inputs, labels) in enumerate(train_data_loader):

            inputs = inputs.to(device)
            labels = labels.to(device)

            # Clean existing gradients
            optimizer.zero_grad()

            # Forward pass - compute outputs on input data using the model
            outputs = model(inputs)

            # Compute loss
            loss = loss_criterion(outputs, labels)

            # Backpropagate the gradients
            loss.backward()

            # Update the parameters
            optimizer.step()

            # Compute the total loss for the batch and add it to train_loss
            train_loss += loss.item() * inputs.size(0)

            # Compute the accuracy
            ret, predictions = torch.max(outputs.data, 1)
            correct_counts = predictions.eq(labels.data.view_as(predictions))

            # Convert correct_counts to float and then compute the mean
            acc = torch.mean(correct_counts.type(torch.FloatTensor))

            # Compute total accuracy in the whole batch and add to train_acc
            train_acc += acc.item() * inputs.size(0)

            #print("Batch number: {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}".format(i, loss.item(), acc.item()))


        # Validation - No gradient tracking needed
        with torch.no_grad():

            # Set to evaluation mode
            model.eval()

            # Validation loop
            for j, (inputs, labels) in enumerate(valid_data_loader):
                inputs = inputs.to(device)
                labels = labels.to(device)

                # Forward pass - compute outputs on input data using the model
                outputs = model(inputs)

                # Compute loss
                loss = loss_criterion(outputs, labels)

                # Compute the total loss for the batch and add it to valid_loss
                valid_loss += loss.item() * inputs.size(0)

                # Calculate validation accuracy
                ret, predictions = torch.max(outputs.data, 1)
                correct_counts = predictions.eq(labels.data.view_as(predictions))

                # Convert correct_counts to float and then compute the mean
                acc = torch.mean(correct_counts.type(torch.FloatTensor))

                # Compute total accuracy in the whole batch and add to valid_acc
                valid_acc += acc.item() * inputs.size(0)

                #print("Validation Batch number: {:03d}, Validation: Loss: {:.4f}, Accuracy: {:.4f}".format(j, loss.item(), acc.item()))

        # Find average training loss and training accuracy
        avg_train_loss = train_loss/train_data_size 
        avg_train_acc = train_acc/train_data_size

        # Find average validation loss and validation accuracy
        avg_valid_loss = valid_loss/valid_data_size 
        avg_valid_acc = valid_acc/valid_data_size

        history.append([avg_train_loss, avg_valid_loss, avg_train_acc, avg_valid_acc])

        epoch_end = time.time()

        print("Epoch : {:03d}, Training: Loss: {:.4f}, Accuracy: {:.4f}%, \n\t\tValidation : Loss : {:.4f}, Accuracy: {:.4f}%, Time: {:.4f}s".format(epoch, avg_train_loss, avg_train_acc*100, avg_valid_loss, avg_valid_acc*100, epoch_end-epoch_start))

        # Save a checkpoint of the model after every epoch
        torch.save(model, dataset+'_model_'+str(epoch)+'.pt')

    return model, history
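
The function above leans on several globals that never appear in the snippet (device, train_data_loader, valid_data_loader, train_data_size, valid_data_size, dataset, num_classes). A minimal sketch of what they could look like, assuming the CalTech256 subset is laid out as ImageFolder-style train/ and valid/ directories; the paths, batch size, and the 'caltech10' name are placeholders, not from the original post:

import time

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

dataset = 'caltech10'   # hypothetical dataset root / checkpoint prefix
num_classes = 10
batch_size = 32
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Augment the training split; only resize/normalize the validation split
image_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]),
    'valid': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]),
}

train_dataset = datasets.ImageFolder(dataset + '/train', transform=image_transforms['train'])
valid_dataset = datasets.ImageFolder(dataset + '/valid', transform=image_transforms['valid'])

train_data_size = len(train_dataset)
valid_data_size = len(valid_dataset)

train_data_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
valid_data_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=False)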

# Load pretrained ResNet50 Model
resnet50 = models.resnet50(pretrained=True)
#resnet50 = resnet50.to('cuda:0')


# Freeze model parameters
for param in resnet50.parameters():
    param.requires_grad = False
# Change the final layer of ResNet50 Model for Transfer Learning
fc_inputs = resnet50.fc.in_features

resnet50.fc = nn.Sequential(
    nn.Linear(fc_inputs, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, num_classes), # Since 10 possible outputs
    nn.LogSoftmax(dim=1) # For using NLLLoss()
)

# Convert model to be used on GPU
# resnet50 = resnet50.to('cuda:0')
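
Since the new head ends in LogSoftmax, the matching criterion is NLLLoss, and because the backbone is frozen only the new fc parameters need gradients. A sketch of the loss_func and optimizer that the traceback below refers to; the names and the call are taken from the traceback, but Adam is an assumption, as the post never shows the optimizer:

loss_func = nn.NLLLoss()                          # pairs with the LogSoftmax output layer
optimizer = optim.Adam(resnet50.fc.parameters())  # only the unfrozen head is trained

resnet50 = resnet50.to(device)

# Train the model
num_epochs = 30
trained_model, history = train_and_validate(resnet50, loss_func, optimizer, num_epochs)
torch.save(history, dataset+'_history.pt')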

The error is:


RuntimeError                              Traceback (most recent call last)
 in ()
      6 # Train the model for 25 epochs
      7 num_epochs = 30
----> 8 trained_model, history = train_and_validate(resnet50, loss_func, optimizer, num_epochs)
      9
     10 torch.save(history, dataset+'_history.pt')

 in train_and_validate(model, loss_criterion, optimizer, epochs)
     43
     44             # Compute loss
---> 45             loss = loss_criterion(outputs, labels)
     46
     47             # Backpropagate the gradients

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    539             result = self._slow_forward(*input, **kwargs)
    540         else:
--> 541             result = self.forward(*input, **kwargs)
    542         for hook in self._forward_hooks.values():
    543             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    202
    203     def forward(self, input, target):
--> 204         return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
    205
    206

~\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed.
This happens if the labels in your dataset are incorrect, or if the labels are 1-indexed (instead of 0-indexed). From the error message, cur_target must be less than the total number of classes (10). To verify the problem, check the maximum and minimum label in your dataset. If the data is indeed 1-indexed, just subtract one from all annotations and you should be fine.
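
A quick way to run that check against the loaders sketched earlier (the variable names are assumptions carried over from that sketch):

# Collect every label the training pipeline produces and inspect the range
labels_seen = torch.cat([labels for _, labels in train_data_loader])
print('min label:', labels_seen.min().item())   # should be 0
print('max label:', labels_seen.max().item())   # should be num_classes - 1, i.e. 9

# If the labels turn out to be 1-indexed, shift them once, e.g. inside the
# training loop right after loading the batch:  labels = labels - 1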

Note that another possible cause is the presence of some -1 labels in the data. Some (especially older) datasets use -1 to mark wrong/dubious labels. If you find such labels, just discard them.
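
If such labels do turn up, one way to drop the offending samples before building the loader, assuming an ImageFolder-style (path, label) sample list as in the earlier sketch:

# Keep only samples whose label is valid (non-negative)
train_dataset.samples = [(path, label)
                         for path, label in train_dataset.samples
                         if label >= 0]
# Keep the parallel targets list consistent with the filtered samples
train_dataset.targets = [label for _, label in train_dataset.samples]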