Python Error: RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase _ Python _ Deep Learning _ Torchvision - Fatal编程技术网


I get an error after running the following script:

-- coding: utf-8 --
Import the required packages
Step 1: Read from the log file
Step 2: Split the data into a training set and a validation set
Step 3a: Define the augmentations, transform pipeline, parameters and dataset for the dataloader
Step 4: Define the network
Step 5: Define the optimizer
Step 6: Check the device and define a function that moves tensors to that device
Step 7: Train and validate the network for the defined maximum number of epochs
Step 8: Define state and save the model wrt state

This produces the error message:

"D:\VICO\backup\venv\Scripts\python.exe" "D:/VICO/backup/venv/Scripts/self_driving_car.py"
device is:  cpu
device is:  cpu
Traceback (most recent call last):
  File "D:/VICO/backup/venv/Scripts/self_driving_car.py", line 165, in <module>
    for local_batch, (centers, lefts, rights) in enumerate(training_generator):
  File "D:\VICO\backup\venv\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "D:\VICO\backup\venv\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
    w.start()
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\VICO\backup\venv\Scripts\self_driving_car.py", line 165, in <module>
    for local_batch, (centers, lefts, rights) in enumerate(training_generator):
  File "D:\VICO\backup\venv\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "D:\VICO\backup\venv\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
    w.start()
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\isonata\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
    is not going to be frozen to produce an executable.''')
RuntimeError:
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
Process finished with exit code 1

I am not sure what the next step to solve the problem is. Simply put:

if __name__ == "__main__":
        main()
to avoid reloading the module in each loop.
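As the error message explains, Windows has no fork: child processes are started with the spawn method, which re-imports the main module, so any code that starts processes (here, iterating a DataLoader with num_workers > 0) must run only under the `__main__` guard. A minimal, self-contained sketch of the idiom (plain multiprocessing used for illustration, not the script from the question):

```python
import multiprocessing as mp

def square(x):
    # Work executed in the child processes.
    return x * x

def main():
    # The pool is created only inside the guard below, so when the "spawn"
    # start method re-imports this module in a child process, no new pool
    # is started and the bootstrap error cannot occur.
    with mp.get_context("spawn").Pool(processes=2) as pool:
        results = pool.map(square, [1, 2, 3, 4])
    print(results)  # prints [1, 4, 9, 16]

if __name__ == "__main__":
    main()
```

Without the guard, each spawned child would re-execute the pool creation at import time, which is exactly what `_check_not_importing_main()` rejects.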

# Imports required by the script below
import cv2
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils import data
from torchvision import transforms

train_len = int(0.8*len(samples))
valid_len = len(samples) - train_len
train_samples, validation_samples = data.random_split(samples, lengths=[train_len, valid_len])
def augment(imgName, angle):
  name = 'data/IMG/' + imgName.split('/')[-1]
  current_image = cv2.imread(name)
  current_image = current_image[65:-25, :, :]
  if np.random.rand() < 0.5:
    current_image = cv2.flip(current_image, 1)
    angle = angle * -1.0  
  return current_image, angle

class Dataset(data.Dataset):

    def __init__(self, samples, transform=None):

        self.samples = samples
        self.transform = transform

    def __getitem__(self, index):
      
        batch_samples = self.samples[index]
        
        steering_angle = float(batch_samples[3])
        
        center_img, steering_angle_center = augment(batch_samples[0], steering_angle)
        left_img, steering_angle_left = augment(batch_samples[1], steering_angle + 0.4)
        right_img, steering_angle_right = augment(batch_samples[2], steering_angle - 0.4)

        center_img = self.transform(center_img)
        left_img = self.transform(left_img)
        right_img = self.transform(right_img)

        return (center_img, steering_angle_center), (left_img, steering_angle_left), (right_img, steering_angle_right)
      
    def __len__(self):
        return len(self.samples)
def _my_normalization(x):
    return x/255.0 - 0.5
transformations = transforms.Compose([transforms.Lambda(_my_normalization)])

params = {'batch_size': 32,
          'shuffle': True,
          'num_workers': 4}

training_set = Dataset(train_samples, transformations)
training_generator = data.DataLoader(training_set, **params)

validation_set = Dataset(validation_samples, transformations)
validation_generator = data.DataLoader(validation_set, **params)
class NetworkDense(nn.Module):

    def __init__(self):
        super(NetworkDense, self).__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2),
            nn.ELU(),
            nn.Conv2d(24, 36, 5, stride=2),
            nn.ELU(),
            nn.Conv2d(36, 48, 5, stride=2),
            nn.ELU(),
            nn.Conv2d(48, 64, 3),
            nn.ELU(),
            nn.Conv2d(64, 64, 3),
            nn.Dropout(0.25)
        )
        self.linear_layers = nn.Sequential(
            nn.Linear(in_features=64 * 2 * 33, out_features=100),
            nn.ELU(),
            nn.Linear(in_features=100, out_features=50),
            nn.ELU(),
            nn.Linear(in_features=50, out_features=10),
            nn.Linear(in_features=10, out_features=1)
        )
        
    def forward(self, input):  
        input = input.view(input.size(0), 3, 70, 320)
        output = self.conv_layers(input)
        output = output.view(output.size(0), -1)
        output = self.linear_layers(output)
        return output


class NetworkLight(nn.Module):

    def __init__(self):
        super(NetworkLight, self).__init__()
        self.conv_layers = nn.Sequential(
            nn.Conv2d(3, 24, 3, stride=2),
            nn.ELU(),
            nn.Conv2d(24, 48, 3, stride=2),
            nn.MaxPool2d(4, stride=4),
            nn.Dropout(p=0.25)
        )
        self.linear_layers = nn.Sequential(
            nn.Linear(in_features=48*4*19, out_features=50),
            nn.ELU(),
            nn.Linear(in_features=50, out_features=10),
            nn.Linear(in_features=10, out_features=1)
        )
        

    def forward(self, input):
        input = input.view(input.size(0), 3, 70, 320)
        output = self.conv_layers(input)
        output = output.view(output.size(0), -1)
        output = self.linear_layers(output)
        return output
model = NetworkLight()
optimizer = optim.Adam(model.parameters(), lr=0.0001)

criterion = nn.MSELoss()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('device is: ', device)

def toDevice(datas, device):
  
  imgs, angles = datas
  return imgs.float().to(device), angles.float().to(device)
max_epochs = 22

for epoch in range(max_epochs):
    
    model.to(device)
    
    # Training
    train_loss = 0
    model.train()
    for local_batch, (centers, lefts, rights) in enumerate(training_generator):
        # Transfer to GPU
        centers, lefts, rights = toDevice(centers, device), toDevice(lefts, device), toDevice(rights, device)
        
        # Model computations
        optimizer.zero_grad()
        datas = [centers, lefts, rights]        
        for batch in datas:  # avoid shadowing the torch.utils.data alias
            imgs, angles = batch
#             print("training image: ", imgs.shape)
            outputs = model(imgs)
            loss = criterion(outputs, angles.unsqueeze(1))
            loss.backward()
            optimizer.step()

            train_loss += loss.item()
            
        if local_batch % 100 == 0:
            print('Loss: %.3f '
                 % (train_loss/(local_batch+1)))

    
    # Validation
    model.eval()
    valid_loss = 0
    with torch.set_grad_enabled(False):
        for local_batch, (centers, lefts, rights) in enumerate(validation_generator):
            # Transfer to GPU
            centers, lefts, rights = toDevice(centers, device), toDevice(lefts, device), toDevice(rights, device)
        
            # Model computations
            # No optimizer.zero_grad() needed here: no gradients are computed
            datas = [centers, lefts, rights]
            for batch in datas:  # avoid shadowing the torch.utils.data alias
                imgs, angles = batch
#                 print("Validation image: ", imgs.shape)
                outputs = model(imgs)
                loss = criterion(outputs, angles.unsqueeze(1))
                
                valid_loss += loss.item()

            if local_batch % 100 == 0:
                print('Valid Loss: %.3f '
                     % (valid_loss/(local_batch+1)))
state = {
        # model.module exists only when the model is wrapped in nn.DataParallel
        'model': model.module if isinstance(model, nn.DataParallel) else model,
        }

torch.save(state, 'model.h5')
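Concretely, the fix for the script above is to move all module-level statements, in particular the DataLoader construction and the loop over training_generator (which is what starts the worker processes), into a main() function. A trimmed, hypothetical skeleton of that restructuring (the dataset here is a numeric stand-in, not the question's driving-log samples):

```python
import torch
from torch.utils import data

class Dataset(data.Dataset):
    """Stand-in for the question's Dataset; returns one float per index."""
    def __init__(self, samples):
        self.samples = samples

    def __getitem__(self, index):
        return torch.tensor(self.samples[index], dtype=torch.float32)

    def __len__(self):
        return len(self.samples)

def main():
    # Everything that was at module level goes here: building the loaders,
    # the model, the optimizer, and the epoch loop. Nothing below runs when
    # a spawned worker re-imports this file, so the bootstrap check passes.
    loader = data.DataLoader(Dataset(list(range(16))),
                             batch_size=4, shuffle=True, num_workers=2)
    for local_batch, batch in enumerate(loader):
        pass  # training step goes here

if __name__ == "__main__":
    main()
```

A quicker workaround, at the cost of single-process data loading, is setting num_workers to 0 so the DataLoader never spawns workers at all.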