Even though PyTorch already provides a large number of standard loss functions, you may sometimes need to create your own. To do this, create a separate file "losses.py" and implement your custom loss by extending "nn.Module":
# losses.py
import torch

class CustomLoss(torch.nn.Module):

    def __init__(self):
        super(CustomLoss, self).__init__()

    def forward(self, x, y):
        loss = torch.mean((x - y)**2)
        return loss
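The custom loss can then be imported and used like any built-in criterion. A brief usage sketch, assuming the class above lives in "losses.py"; the tensor shapes are placeholders:

import torch
from losses import CustomLoss   # assumes the file created above

criterion = CustomLoss()
prediction = torch.randn(8, 10, requires_grad=True)   # placeholder shapes
target = torch.randn(8, 10)
loss = criterion(prediction, target)   # calls CustomLoss.forward
loss.backward()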
5. The best code structure for training a model
For the best training code structure, we need to use the following two patterns:
- Use BackgroundGenerator from prefetch_generator to load the next batch of data
- Use tqdm to monitor training and display the compute efficiency, which helps us find bottlenecks in the data loading pipeline
# import statements
import torch
import torch.nn as nn
from torch.utils import data
...

# set flags / seeds
torch.backends.cudnn.benchmark = True
np.random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed(1)
...

# Start with main code
if __name__ == '__main__':
    # argparse for additional flags for experiment
    parser = argparse.ArgumentParser(description="Train a network for ...")
    ...
    opt = parser.parse_args()

    # add code for datasets (we always use train and validation/ test set)
    data_transforms = transforms.Compose([
        transforms.Resize((opt.img_size, opt.img_size)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])

    train_dataset = datasets.ImageFolder(
        root=os.path.join(opt.path_to_data, "train"),
        transform=data_transforms)
    train_data_loader = data.DataLoader(train_dataset, ...)

    test_dataset = datasets.ImageFolder(
        root=os.path.join(opt.path_to_data, "test"),
        transform=data_transforms)
    test_data_loader = data.DataLoader(test_dataset, ...)
    ...

    # instantiate network (which has been imported from *networks.py*)
    net = MyNetwork(...)
    ...

    # create losses (criterion in pytorch)
    criterion_L1 = torch.nn.L1Loss()
    ...

    # if running on GPU and we want to use cuda move model there
    use_cuda = torch.cuda.is_available()
    if use_cuda:
        net = net.cuda()
    ...

    # create optimizers
    optim = torch.optim.Adam(net.parameters(), lr=opt.lr)
    ...

    # load checkpoint if needed/ wanted
    start_n_iter = 0
    start_epoch = 0
    if opt.resume:
        ckpt = load_checkpoint(opt.path_to_checkpoint)  # custom method for loading last checkpoint
        net.load_state_dict(ckpt['net'])
        start_epoch = ckpt['epoch']
        start_n_iter = ckpt['n_iter']
        optim.load_state_dict(ckpt['optim'])
        print("last checkpoint restored")
    ...

    # if we want to run experiment on multiple GPUs we move the models there
    net = torch.nn.DataParallel(net)
    ...

    # typically we use tensorboardX to keep track of experiments
    writer = SummaryWriter(...)

    # now we start the main loop
    n_iter = start_n_iter
    for epoch in range(start_epoch, opt.epochs):
        # set models to train mode
        net.train()
        ...

        # use prefetch_generator and tqdm for iterating through data
        pbar = tqdm(enumerate(BackgroundGenerator(train_data_loader, ...)),
                    total=len(train_data_loader))
        start_time = time.time()

        # for loop going through dataset
        for i, data in pbar:
            # data preparation
            img, label = data
            if use_cuda:
                img = img.cuda()
                label = label.cuda()
            ...

            # It's very good practice to keep track of preparation time and
            # computation time using tqdm to find any issues in your dataloader
            prepare_time = time.time() - start_time

            # forward and backward pass
            optim.zero_grad()
            ...
            loss.backward()
            optim.step()
            ...

            # update tensorboardX
            writer.add_scalar(..., n_iter)
            ...

            # compute computation time and *compute_efficiency*
            process_time = time.time() - start_time - prepare_time
            pbar.set_description("Compute efficiency: {:.2f}, epoch: {}/{}:".format(
                process_time / (process_time + prepare_time), epoch, opt.epochs))
            start_time = time.time()

        # maybe do a test pass every x epochs
        if epoch % x == x - 1:
            # bring models to evaluation mode
            net.eval()
            ...
            # do some tests
            pbar = tqdm(enumerate(BackgroundGenerator(test_data_loader, ...)),
                        total=len(test_data_loader))
            for i, data in pbar:
                ...

            # save checkpoint if needed
            ...
III. Multi-GPU training in PyTorch
There are two patterns for training with multiple GPUs in PyTorch.
In our experience, both patterns are effective. However, the first one yields better results and requires less code, while the second seems to have a slight performance advantage because there is less communication between the GPUs.
1. Split the input batch of every network
The most common approach is simply to split the input batches of all networks and distribute them across the individual GPUs.
This way, a model that runs with a batch size of 64 on 1 GPU runs with a batch size of 32 per GPU when using 2 GPUs. This can be handled automatically by the "nn.DataParallel(model)" wrapper.
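A minimal sketch of this pattern; the toy model, shapes, and batch size below are placeholders, not from the original article:

import torch
import torch.nn as nn

# toy network purely for illustration
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.is_available():
    if torch.cuda.device_count() > 1:
        # DataParallel splits each incoming batch along dim 0, scatters the
        # chunks across the visible GPUs and gathers the outputs again
        model = nn.DataParallel(model)
    model = model.cuda()
    x = torch.randn(64, 128).cuda()   # with 2 GPUs, each one sees 32 samples
    out = model(x)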
2. Pack all the networks into one super-network and split the input batch
This pattern is less common. The repository below shows Nvidia's implementation of pix2pixHD, which contains an implementation of this approach.
Link: https://github.com/NVIDIA/pix2pixHD
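The rough idea, sketched with hypothetical sub-networks rather than Nvidia's actual code: several models (and the loss) are wrapped into one nn.Module, so a single DataParallel call splits the batch for all of them at once.

import torch
import torch.nn as nn

class FullModel(nn.Module):
    """Packs generator, discriminator and criterion into one module so that
    one DataParallel wrapper handles the batch splitting for everything."""
    def __init__(self, generator, discriminator, criterion):
        super(FullModel, self).__init__()
        self.generator = generator
        self.discriminator = discriminator
        self.criterion = criterion

    def forward(self, inputs, targets):
        fake = self.generator(inputs)
        pred = self.discriminator(fake)
        # returning the loss keeps most of the computation on each GPU;
        # only the per-GPU losses need to be gathered and averaged
        return self.criterion(pred, targets)

# hypothetical sub-networks; replace with real models
full = FullModel(nn.Linear(16, 16), nn.Linear(16, 1), nn.MSELoss())
if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    full = nn.DataParallel(full).cuda()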
IV. Dos and don'ts in PyTorch
1. Avoid Numpy code in the "forward" method of an "nn.Module"
Numpy runs on the CPU and is slower than torch code. Since torch was developed with a design philosophy similar to numpy's, most Numpy functions are already supported in PyTorch.
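As a hypothetical illustration, the same clipping operation written with a numpy round trip versus a pure torch call:

import numpy as np
import torch
import torch.nn as nn

class ClampLayer(nn.Module):
    def __init__(self, use_numpy=False):
        super(ClampLayer, self).__init__()
        self.use_numpy = use_numpy

    def forward(self, x):
        if self.use_numpy:
            # slow: copies to the CPU, breaks the autograd graph, copies back
            return torch.from_numpy(np.clip(x.detach().cpu().numpy(), 0.0, 1.0))
        # fast: stays on the GPU and keeps autograd intact
        return torch.clamp(x, 0.0, 1.0)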
2. Separate the "DataLoader" from the main program code
The data loading workflow should be independent of your main training code. PyTorch uses "background" processes to load data more efficiently, without interfering with the main training process.
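A minimal sketch, reusing the image-folder setup from the template above; the path, batch size, and worker count are placeholder values:

from torch.utils import data
from torchvision import datasets, transforms

train_dataset = datasets.ImageFolder(
    root="path/to/data/train",           # placeholder path
    transform=transforms.ToTensor())

# num_workers > 0 loads batches in separate background worker processes;
# pin_memory=True speeds up the host-to-GPU copies
train_data_loader = data.DataLoader(
    train_dataset, batch_size=64, shuffle=True,
    num_workers=4, pin_memory=True)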
3. Don't log results at every step
Typically, we train our models for several thousand steps, so to reduce computational overhead it is enough to log the loss and other results every n steps. Saving intermediate results as images during training is especially expensive.
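For example, a small sketch of throttled logging; the interval, tag name, and dummy loss are made up for illustration:

import torch
from tensorboardX import SummaryWriter

writer = SummaryWriter("runs/example")   # hypothetical log directory
log_every = 100                          # hypothetical logging interval

for n_iter in range(1000):
    loss = torch.rand(1)                 # stand-in for the real training loss
    # only hit the (comparatively slow) logger every `log_every` steps
    if n_iter % log_every == 0:
        writer.add_scalar("train/loss", loss.item(), n_iter)
writer.close()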
4. Use command-line arguments