PyTorch: next(iter(training_loader)) is extremely slow on simple data; can I use more workers?


Here `x_dat` and `y_dat` are both very long one-dimensional tensors.

class FunctionDataset(Dataset):
    def __init__(self):
        x_dat, y_dat = data_product()

        self.length = len(x_dat)
        self.y_dat = y_dat
        self.x_dat = x_dat

    def __getitem__(self, index):
        sample = self.x_dat[index]
        label = self.y_dat[index]
        return sample, label

    def __len__(self):
        return self.length

...

data_set = FunctionDataset()

...

training_sampler = SubsetRandomSampler(train_indices)
validation_sampler = SubsetRandomSampler(validation_indices)

training_loader = DataLoader(data_set, sampler=training_sampler, batch_size=params['batch_size'], shuffle=False)
validation_loader = DataLoader(data_set, sampler=validation_sampler, batch_size=valid_size, shuffle=False)
I have also tried pinning memory for both loaders. Setting `num_workers` > 0 produces runtime errors between the processes (such as EOFError and interrupt errors). I fetch my batches with:

x_val, target = next(iter(training_loader))
The whole dataset fits in memory/on the GPU, but I want to emulate batching for this experiment. Profiling my process gives the following:

16276989 function calls (16254744 primitive calls) in 38.779 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
   1745/1    0.028    0.000   38.780   38.780 {built-in method builtins.exec}
        1    0.052    0.052   38.780   38.780 simple aprox.py:3(<module>)
        1    0.000    0.000   36.900   36.900 simple aprox.py:519(exploreHeatmap)
        1    0.000    0.000   36.900   36.900 simple aprox.py:497(optFromSample)
        1    0.033    0.033   36.900   36.900 simple aprox.py:274(train)
  705/483    0.001    0.000   34.495    0.071 {built-in method builtins.next}
      222    1.525    0.007   34.493    0.155 dataloader.py:311(__next__)
      222    0.851    0.004   12.752    0.057 dataloader.py:314(<listcomp>)
  3016001   11.901    0.000   11.901    0.000 simple aprox.py:176(__getitem__)
       21    0.010    0.000   10.891    0.519 simple aprox.py:413(validationError)
      443    1.380    0.003    9.664    0.022 sampler.py:136(__iter__)
  663/221    2.209    0.003    8.652    0.039 dataloader.py:151(default_collate)
      221    0.070    0.000    6.441    0.029 dataloader.py:187(<listcomp>)
      442    6.369    0.014    6.369    0.014 {built-in method stack}
  3060221    2.799    0.000    5.890    0.000 sampler.py:68(<genexpr>)
  3060000    3.091    0.000    3.091    0.000 tensor.py:382(<lambda>)
      222    0.001    0.000    1.985    0.009 sampler.py:67(__iter__)
      222    1.982    0.009    1.982    0.009 {built-in method randperm}
  663/221    0.002    0.000    1.901    0.009 dataloader.py:192(pin_memory_batch)
      221    0.000    0.000    1.899    0.009 dataloader.py:200(<listcomp>)
....
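For context, a table in this format comes from Python's built-in cProfile module. A minimal sketch of how such output is produced (the `work` function is a dummy stand-in for the actual training loop):

```python
import cProfile
import io
import pstats

def work():
    # Dummy workload standing in for the profiled training loop
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Render the stats sorted by cumulative time, like the table above
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)  # top 5 rows
print(buf.getvalue())
```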

Compared to the rest of my experiment's activity (training the model, lots of other computation, etc.), the data loader is extremely slow. What is going wrong, and what is the best way to speed it up?

When you retrieve a batch with

x, y = next(iter(training_loader))

you actually create a brand-new DataLoader iterator instance on every call (!). Instead, you should create the iterator once (per epoch):

training_loader_iter = iter(training_loader)

and then call `next` on that iterator for each batch:

for i in range(num_batches_in_epoch):
  x, y = next(training_loader_iter)
I ran into a similar problem before, and this change also made the EOF errors you get with multiple workers disappear.
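Equivalently, the most idiomatic pattern is to iterate over the loader directly with a `for` loop, which builds the iterator once per epoch under the hood. A self-contained sketch (the tiny `TensorDataset` here is a hypothetical stand-in for the question's `FunctionDataset`):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical tiny dataset standing in for the question's data
x = torch.arange(1000, dtype=torch.float32)
training_loader = DataLoader(TensorDataset(x, 2 * x), batch_size=100, shuffle=True)

# Iterating the loader directly creates one iterator per epoch
for epoch in range(3):
    for xb, yb in training_loader:
        pass  # training step would go here

print(xb.shape)
```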

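The cost difference is easy to measure: with `shuffle=True` (or a random sampler), every fresh iterator re-shuffles the whole index list (via `randperm`, visible in the profile above) before yielding a single batch. A small benchmark sketch, using a synthetic dataset as a stand-in for the question's data:

```python
import time
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical synthetic stand-in for the question's long 1-D tensors
x = torch.arange(100_000, dtype=torch.float32)
loader = DataLoader(TensorDataset(x, torch.sin(x)), batch_size=256, shuffle=True)

num_batches = 50

# Slow pattern: a fresh iterator (and a fresh shuffle) for every batch
t0 = time.perf_counter()
for _ in range(num_batches):
    xb, yb = next(iter(loader))
slow = time.perf_counter() - t0

# Fast pattern: one iterator per epoch, advanced with next()
t0 = time.perf_counter()
loader_iter = iter(loader)
for _ in range(num_batches):
    xb, yb = next(loader_iter)
fast = time.perf_counter() - t0

print(f"new iterator per batch: {slow:.3f}s, cached iterator: {fast:.3f}s")
```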