Variable performance training a ResNet on Google Compute Engine using data from Google Cloud Storage
I'm trying to train a resnet18 image classifier on a large number of crops taken from labeled Google Streetview data, following a tutorial. I have two datasets, one of roughly 20k images and one of roughly 100k images. Both datasets are stored in the same format, and each has been uploaded to its own Google Cloud Storage bucket. I then mount both buckets into the VM's home directory using gcsfuse with the --implicit-dirs flag.

I then run my train.py file on my Google Compute Engine VM, which was created from Google's Cloud Marketplace. The VM has one vCPU, an Nvidia Tesla K80 GPU, 3.75 GB of memory, and a 100 GB persistent disk.
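As a rough sanity check that both mounts expose the same directory layout, something like the following can be run against each mount point (not part of my training script; the mount-point paths below are placeholders):

import os

# Placeholder mount points for the two gcsfuse-mounted buckets.
for mount in ['/home/gweld/crops_20k', '/home/gweld/crops_100k']:
    for split in ['Test', 'Val']:
        split_dir = os.path.join(mount, split)
        classes = sorted(os.listdir(split_dir))
        # Count the images under each class directory.
        counts = {c: len(os.listdir(os.path.join(split_dir, c))) for c in classes}
        print(mount, split, counts)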
When I run the training script, I don't change anything between runs except pointing the dataset_dir variable at the correct gcsfuse-mounted directory on the VM.
When I run train.py on the 100k-crop directory, it runs relatively quickly; a single epoch takes about 30 minutes. While it runs, if I jump into top, CPU utilization is quite high, staying around 90%.
However, on the same VM, when I run train.py on the 20k-crop directory, it runs much more slowly; a single epoch takes 6-7 hours despite the smaller dataset. In this case, CPU utilization never rises above 5%.
I have no idea what's causing the slowdown, since as far as I can tell nothing differs between the two runs except the dataset, and both datasets are stored in the same format. I use the same pytorch dataloader and the same number of worker threads. Both GCS buckets are in the same region, us-west1, which is the same region as my VM instance.
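One way to make the comparison concrete (not part of the original script; the dataset paths below are placeholders, while the transform and loader settings mirror train.py) is to time the dataloader on its own for each mount, with no model involved:

import time
import torch
from torchvision import datasets, transforms

# Same kind of preprocessing as train.py, but measured in isolation.
tfm = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])

def images_per_second(root, num_batches=50):
    ds = datasets.ImageFolder(root, tfm)
    loader = torch.utils.data.DataLoader(ds, batch_size=4, shuffle=True,
                                         num_workers=4)
    start = time.time()
    seen = 0
    for i, (inputs, labels) in enumerate(loader):
        seen += inputs.size(0)
        if i + 1 >= num_batches:
            break
    return seen / (time.time() - start)

# Placeholder mount points for the two datasets.
for root in ['/home/gweld/crops_20k/Test', '/home/gweld/crops_100k/Test']:
    print(root, images_per_second(root), 'images/sec')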
It seems possible that one bucket is IO-limited relative to the other, but I don't know why that would be.
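If one mount really is IO-limited, that should also show up when simply reading raw bytes through gcsfuse, with no image decoding or transforms involved. A rough sketch (again with placeholder mount paths):

import os
import time

def read_throughput(root, limit=500):
    """Read up to `limit` files under `root` and report MB/s."""
    total_bytes = 0
    count = 0
    start = time.time()
    for dirpath, _, filenames in os.walk(root):
        for fname in filenames:
            with open(os.path.join(dirpath, fname), 'rb') as f:
                total_bytes += len(f.read())
            count += 1
            if count >= limit:
                return total_bytes / 1e6 / (time.time() - start)
    return total_bytes / 1e6 / (time.time() - start)

# Placeholder mount points for the two gcsfuse mounts.
for root in ['/home/gweld/crops_20k', '/home/gweld/crops_100k']:
    print(root, read_throughput(root), 'MB/s')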
Any ideas would be much appreciated.
Here is my train.py file:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
from collections import defaultdict
data_transforms = {
    'Test': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'Val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}
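# Note: in this script the 'Test' split plays the role of the training set
# (augmenting transforms above, gradients enabled in train_model below),
# while 'Val' is the held-out evaluation split.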
data_dir = 'home/gweld/sliding_window_dataset/'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['Test', 'Val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                               shuffle=True, num_workers=4)
               for x in ['Test', 'Val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['Test', 'Val']}
class_names = image_datasets['Test'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['Test', 'Val']:
            if phase == 'Test':
                scheduler.step()
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            class_corrects = defaultdict(int)
            class_totals = defaultdict(int)

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                inputs = inputs.to(device)
                labels = labels.to(device)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'Test'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'Test':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)

                for index, pred in enumerate(preds):
                    actual = labels.data[index]
                    class_name = class_names[actual]

                    if actual == pred: class_corrects[class_name] += 1
                    class_totals[class_name] += 1

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]

            print('{} Loss: {:.4f} Acc: {:.4f}'.format(
                phase, epoch_loss, epoch_acc))

            if phase == 'Val':
                print("Validation Class Accuracies")

                for class_name in class_totals:
                    class_acc = float(class_corrects[class_name])
                    class_acc = class_acc / class_totals[class_name]

                    print("{:20}{}%".format(class_name, 100 * class_acc))
                print("\n")

            # deep copy the model
            if phase == 'Val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 5) # last arg here, # classes? -gw
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
# Train and evaluate
# ^^^^^^^^^^^^^^^^^^
print('Beginning Training on {} train and {} val images.'.format(dataset_sizes['Test'], dataset_sizes['Val']))
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=25)
torch.save(model_ft.state_dict(), 'models/test_run_resnet18.pt')
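Not part of the original script, but for completeness, a minimal sketch of how the saved weights could be loaded back later for inference (the path and class count are taken from train.py above; everything else is assumed):

import torch
import torch.nn as nn
from torchvision import models

# Rebuild the same architecture, then load the state dict saved by train.py.
model = models.resnet18(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 5)  # 5 classes, as in train.py
model.load_state_dict(torch.load('models/test_run_resnet18.pt',
                                 map_location='cpu'))
model.eval()  # switch to inference mode before evaluating images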
Are the two datasets in the same storage class? If not, check gsutil ls -L -b gs://<bucket> | grep Storage. If you run the same CPU/GPU ML process against two buckets and one of them is much slower, there's a good chance something is causing different IO performance between the two buckets. Is the slow one using Nearline/Coldline, and what about the fast one? Is the FUSE filesystem mounted or configured differently in any way?
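A minimal way to check the storage class and location of both buckets from Python, assuming the google-cloud-storage client library is installed and using placeholder bucket names:

from google.cloud import storage

client = storage.Client()

# Hypothetical bucket names; substitute the two real buckets.
for name in ['crops-20k-bucket', 'crops-100k-bucket']:
    bucket = client.get_bucket(name)
    print(name, bucket.storage_class, bucket.location)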