Python error: _pickle.PicklingError: Can't pickle <function <lambda> at 0x0000002F2175B048>: attribute lookup <lambda> on __main__ failed

Tags: python, deep-learning, torchvision

I'm trying to run code that reportedly runs fine for other users, but I get this error.

The script is organized as follows:

- imports
- Step 1: read from the log file
- Step 2: split the data into training and validation sets
- Step 3a: define augmentation, the transform pipeline, the DataLoader parameters, and the Dataset
- Step 4: define the network (class NetworkDense(nn.Module))
- Step 5: define the optimizer
- Step 6: check the device and define a function that moves tensors to it
- Step 7: train and validate the network for the defined maximum number of epochs
- Step 8: define a state and save the model to it

This is the error message:

"D:\VICO\Back up\venv\Scripts\python.exe" "D:/VICO/Back up/venv/Scripts/self_driving_car.py"
device is:  cpu
Traceback (most recent call last):
  File "D:/VICO/Back up/venv/Scripts/self_driving_car.py", line 163, in <module>
    for local_batch, (centers, lefts, rights) in enumerate(training_generator):
  File "D:\VICO\Back up\venv\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "D:\VICO\Back up\venv\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
    w.start()
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x0000002F2175B048>: attribute lookup <lambda> on __main__ failed

Process finished with exit code 1

I'm not sure what the next step is to fix this.

pickle doesn't pickle function objects. It expects to find the function object by importing its module and looking up its name. Lambdas are anonymous functions (they have no name), so that doesn't work. The solution is to name the function at module level. The only lambda I found in your code is

transformations = transforms.Compose([transforms.Lambda(lambda x: (x / 255.0) - 0.5)])
Assuming that's the troublesome function, you can do

def _my_normalization(x):
    return x/255.0 - 0.5

transformations = transforms.Compose([transforms.Lambda(_my_normalization)])
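
To see why the named function pickles while the lambda does not, here is a minimal sketch using plain pickle, independent of torchvision:

import pickle

def _my_normalization(x):
    return x / 255.0 - 0.5

# A module-level function pickles fine: pickle stores only its module
# and name, and re-imports it on load.
payload = pickle.dumps(_my_normalization)

# A lambda has no importable name, so this raises
# _pickle.PicklingError: Can't pickle <function <lambda> at ...>:
# attribute lookup <lambda> on __main__ failed
pickle.dumps(lambda x: (x / 255.0) - 0.5)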

You may run into other problems, because it looks like you're working at module level. If this is a multiprocessing issue and you're running on Windows, the new process imports the file and runs all of the module-level code again. That's not a problem on linux/mac, where the forked process already has the module loaded from the parent.
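
As a minimal sketch of the Windows-safe layout (the main() wrapper is my own naming, not from the original code): everything that starts DataLoader workers goes under the __main__ guard, so the re-import done by each spawned child doesn't run it again.

import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # tiny stand-in dataset; the real script would build Dataset(train_samples, ...)
    dataset = TensorDataset(torch.randn(8, 3), torch.randn(8, 1))
    # num_workers > 0 starts worker processes; on Windows each worker
    # re-imports this module, so this must not execute at import time
    loader = DataLoader(dataset, batch_size=4, num_workers=2)
    for imgs, angles in loader:
        print(imgs.shape, angles.shape)

if __name__ == '__main__':
    # only the original process runs main(); the copies re-imported by
    # the workers skip it, preventing the endless spawning of processes
    main()

Setting num_workers=0 is the other quick fix: data loading then happens in the main process, so nothing has to be pickled over to workers at all.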

Use state = {'model': model.state_dict()} instead, and then model.load_state_dict(...) when loading.

Yes, actually I have another problem with multiprocessing: "RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The 'freeze_support()' line can be omitted if the program is not going to be frozen to produce an executable." Hi tdelaney, is there any way to fix this? Thanks

It sounds like you're running on Windows. On Windows the module is re-imported in the child process, so anything at module level runs in the child process too; in your case that includes the code that creates the child processes, which results in an endless spawn of processes. Use if __name__ == '__main__': in the main script. Just search for "RuntimeError: An attempt has been made to start a new process" and you'll get a hundred hits.

Thanks tdelaney, I put if __name__ == '__main__': and main() in step 7 and everything works.

For reference, here is the code from the question:
def augment(imgName, angle):
    # load the image that corresponds to this log entry
    name = 'data/IMG/' + imgName.split('/')[-1]
    current_image = cv2.imread(name)
    # crop away the top 65 and bottom 25 rows of pixels
    current_image = current_image[65:-25, :, :]
    # with probability 0.5, mirror the image and negate the steering angle
    if np.random.rand() < 0.5:
        current_image = cv2.flip(current_image, 1)
        angle = angle * -1.0
    return current_image, angle

class Dataset(data.Dataset):

    def __init__(self, samples, transform=None):

        self.samples = samples
        self.transform = transform

    def __getitem__(self, index):
      
        batch_samples = self.samples[index]
        
        steering_angle = float(batch_samples[3])
        
        center_img, steering_angle_center = augment(batch_samples[0], steering_angle)
        left_img, steering_angle_left = augment(batch_samples[1], steering_angle + 0.4)
        right_img, steering_angle_right = augment(batch_samples[2], steering_angle - 0.4)

        center_img = self.transform(center_img)
        left_img = self.transform(left_img)
        right_img = self.transform(right_img)

        return (center_img, steering_angle_center), (left_img, steering_angle_left), (right_img, steering_angle_right)
      
    def __len__(self):
        return len(self.samples)
transformations = transforms.Compose([transforms.Lambda(lambda x: (x / 255.0) - 0.5)])

params = {'batch_size': 32,
          'shuffle': True,
          'num_workers': 4}

training_set = Dataset(train_samples, transformations)
training_generator = data.DataLoader(training_set, **params)

validation_set = Dataset(validation_samples, transformations)
validation_generator = data.DataLoader(validation_set, **params)
class NetworkDense(nn.Module):

    def __init__(self):
        super(NetworkDense, self).__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2),
            nn.ELU(),
            nn.Conv2d(24, 36, 5, stride=2),
            nn.ELU(),
            nn.Conv2d(36, 48, 5, stride=2),
            nn.ELU(),
            nn.Conv2d(48, 64, 3),
            nn.ELU(),
            nn.Conv2d(64, 64, 3),
            nn.Dropout(0.25)
        )
        self.linear_layers = nn.Sequential(
            nn.Linear(in_features=64 * 2 * 33, out_features=100),
            nn.ELU(),
            nn.Linear(in_features=100, out_features=50),
            nn.ELU(),
            nn.Linear(in_features=50, out_features=10),
            nn.Linear(in_features=10, out_features=1)
        )

    def forward(self, input):
        input = input.view(input.size(0), 3, 70, 320)
        output = self.conv_layers(input)
        output = output.view(output.size(0), -1)
        output = self.linear_layers(output)
        return output


class NetworkLight(nn.Module):

    def __init__(self):
        super(NetworkLight, self).__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(3, 24, 3, stride=2),
            nn.ELU(),
            nn.Conv2d(24, 48, 3, stride=2),
            nn.MaxPool2d(4, stride=4),
            nn.Dropout(p=0.25)
        )
        self.linear_layers = nn.Sequential(
            nn.Linear(in_features=48*4*19, out_features=50),
            nn.ELU(),
            nn.Linear(in_features=50, out_features=10),
            nn.Linear(in_features=10, out_features=1)
        )

    def forward(self, input):
        input = input.view(input.size(0), 3, 70, 320)
        output = self.conv_layers(input)
        output = output.view(output.size(0), -1)
        output = self.linear_layers(output)
        return output
model = NetworkLight()
optimizer = optim.Adam(model.parameters(), lr=0.0001)

criterion = nn.MSELoss()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 
print('device is: ', device)

def toDevice(datas, device):
    imgs, angles = datas
    return imgs.float().to(device), angles.float().to(device)
max_epochs = 22

for epoch in range(max_epochs):
    
    model.to(device)
    
    # Training
    train_loss = 0
    model.train()
    for local_batch, (centers, lefts, rights) in enumerate(training_generator):
        # Transfer to GPU
        centers, lefts, rights = toDevice(centers, device), toDevice(lefts, device), toDevice(rights, device)
        
        # Model computations
        optimizer.zero_grad()
        datas = [centers, lefts, rights]        
        for data in datas:
            imgs, angles = data
#             print("training image: ", imgs.shape)
            outputs = model(imgs)
            loss = criterion(outputs, angles.unsqueeze(1))
            loss.backward()
            optimizer.step()

            train_loss += loss.item()
            
        if local_batch % 100 == 0:
            print('Loss: %.3f '
                 % (train_loss/(local_batch+1)))

    
    # Validation
    model.eval()
    valid_loss = 0
    with torch.set_grad_enabled(False):
        for local_batch, (centers, lefts, rights) in enumerate(validation_generator):
            # Transfer to GPU
            centers, lefts, rights = toDevice(centers, device), toDevice(lefts, device), toDevice(rights, device)
        
            # Model computations
            optimizer.zero_grad()
            datas = [centers, lefts, rights]        
            for data in datas:
                imgs, angles = data
#                 print("Validation image: ", imgs.shape)
                outputs = model(imgs)
                loss = criterion(outputs, angles.unsqueeze(1))
                
                valid_loss += loss.item()

            if local_batch % 100 == 0:
                print('Valid Loss: %.3f '
                     % (valid_loss/(local_batch+1)))
state = {
    'model': model.module if device == 'cuda' else model,
}

torch.save(state, 'model.h5')
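
As suggested in the comments above, a minimal sketch of saving just the weights via state_dict instead (the 'model.pth' file name is an arbitrary choice):

# save only the weights, which are plain tensors and pickle cleanly
state = {'model': model.state_dict()}
torch.save(state, 'model.pth')

# later, restore into a fresh instance of the same architecture
model = NetworkLight()
model.load_state_dict(torch.load('model.pth')['model'])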
"D:\VICO\Back up\venv\Scripts\python.exe" "D:/VICO/Back up/venv/Scripts/self_driving_car.py"
device is:  cpu
Traceback (most recent call last):
  File "D:/VICO/Back up/venv/Scripts/self_driving_car.py", line 163, in <module>
    for local_batch, (centers, lefts, rights) in enumerate(training_generator):
  File "D:\VICO\Back up\venv\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "D:\VICO\Back up\venv\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
    w.start()
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x0000002F2175B048>: attribute lookup <lambda> on __main__ failed

Process finished with exit code 1