Slurm Distributed Training + Job Submission

Setting up the process group and distributed samplers

    if hparams.multi_gpu:
        logger.info('-------------  distributed training -----------------')
        torch.distributed.init_process_group(backend='nccl')
        # On a single node the global rank equals the local rank; for multi-node
        # jobs, read the local rank from the LOCAL_RANK environment variable instead.
        local_rank = torch.distributed.get_rank()
        torch.cuda.set_device(local_rank)
        device = torch.device("cuda", local_rank)  # the GPU assigned to this process
        nprocs = torch.cuda.device_count()

        # Distributed samplers: each process sees a distinct shard of the data
        train_sampler = torch.utils.data.distributed.DistributedSampler(train_data, shuffle=True)
        valid_sampler = torch.utils.data.distributed.DistributedSampler(valid_data, shuffle=False)
        test_sampler = torch.utils.data.distributed.DistributedSampler(test_data, shuffle=False)

        train_loader = DataLoader(train_data, batch_size=hparams.batch_size, collate_fn=collate, sampler=train_sampler)
        valid_loader = DataLoader(valid_data, batch_size=hparams.batch_size, collate_fn=collate, sampler=valid_sampler)
        test_loader = DataLoader(test_data, batch_size=hparams.batch_size, collate_fn=collate, sampler=test_sampler)
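One detail the snippet above omits: when `shuffle=True`, `DistributedSampler` must be re-seeded with `set_epoch(epoch)` at the start of each epoch, otherwise every epoch replays the same shuffle order. A minimal single-process sketch (gloo backend and a toy dataset, so it runs without GPUs; these names are illustrative, not from the original):

```python
import os
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Single-process "world" on the gloo backend, just to exercise the sampler;
# under torch.distributed.launch these values come from the environment.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

data = TensorDataset(torch.arange(8))
sampler = DistributedSampler(data, shuffle=True)

orders = []
for epoch in range(2):
    sampler.set_epoch(epoch)  # re-seed the shuffle for this epoch
    loader = DataLoader(data, batch_size=4, sampler=sampler)
    orders.append([x.item() for (batch,) in loader for x in batch])

# Each process still covers its full shard (the whole dataset when
# world_size == 1), only the visiting order changes between epochs.
print(sorted(orders[0]))  # → [0, 1, 2, 3, 4, 5, 6, 7]
dist.destroy_process_group()
```

In the real loop above, this is just `train_sampler.set_epoch(epoch)` once per epoch before iterating `train_loader`.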

Wrapping the model

    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank], output_device=local_rank)

Because the model is now wrapped, accessing its components directly (e.g. `model.fc`) raises an AttributeError: the original module lives under `model.module`. Unwrap it first:

if isinstance(model, (torch.nn.DataParallel, torch.nn.parallel.DistributedDataParallel)):
    model = model.module
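A small CPU-only demonstration of the problem, using `nn.DataParallel` and a toy model (`Net` and its `fc` layer are made up for illustration; `DistributedDataParallel` hides submodules the same way):

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    """Toy model with an .fc attribute, standing in for the real model."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

net = Net()
wrapped = nn.DataParallel(net)

# The wrapper does not forward attribute lookups: wrapped.fc raises
# AttributeError, while the original module is reachable as wrapped.module.
print(hasattr(wrapped, "fc"), wrapped.module.fc is net.fc)  # → False True
```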

Reducing the loss, gradients, and accuracy across processes. Each GPU loads a different shard of the data, so the per-process loss and accuracy differ and must be merged:

import torch.distributed as dist

def average_gradients(model):
    """Average gradients across all processes (call after loss.backward())."""
    size = float(dist.get_world_size())
    for param in model.parameters():
        if param.grad is None:
            continue
        dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
        param.grad.data /= size

def reduce_mean(tensor, nprocs):
    """Sum a tensor (or Python scalar) across all processes and divide by nprocs."""
    rt = torch.as_tensor(tensor, device=device).clone()
    dist.all_reduce(rt, op=dist.ReduceOp.SUM)
    rt /= nprocs  # all_reduce only sums; divide to get the mean
    return rt
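A single-process sanity check for `reduce_mean` (the helper is restated so the snippet is self-contained; the gloo backend lets the collective run without GPUs, and with one process the mean simply equals the input):

```python
import os
import torch
import torch.distributed as dist

def reduce_mean(tensor, nprocs):
    """Sum a tensor across all processes, then divide by the process count."""
    rt = tensor.clone()
    dist.all_reduce(rt, op=dist.ReduceOp.SUM)
    rt /= nprocs
    return rt

# Single-process gloo group; in a real job, nprocs is the number of
# launched processes and each contributes its own local value.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

acc = torch.tensor(0.75)
mean_acc = reduce_mean(acc, dist.get_world_size())
print(mean_acc.item())  # → 0.75 (sum over one process divided by 1)
dist.destroy_process_group()
```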

In the training loop:

        optimizer.zero_grad()
        loss.backward()
        # Gradients exist only after backward(), so average them here.
        # Note: DistributedDataParallel already all-reduces gradients during
        # backward(), so this manual step matters mainly for unwrapped models.
        if hparams.multi_gpu:
            average_gradients(model)
        optimizer.step()
        scheduler.step()
        total_loss += loss.item()
        # Per-process metrics must also be merged
        if hparams.multi_gpu:
            acc = reduce_mean(acc, nprocs)
  • Submit the job to Slurm as a *.sh script, or type the command directly in an interactive bash session. After submission, seeing the output of `print` (and similar calls) repeated once per process is a sign that the distributed launch is working correctly.
# Distributed DataParallel (multi-GPU)
env CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
    train.py \
    --dataset Subj \
    --epochs 50 \
    --learning_rate 0.0005 \
    --batch_size 128 \
    --multi_gpu
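If you submit via a script instead of the interactive shell, a minimal sbatch sketch might look like the following (the job name, GPU count, and log pattern are placeholders, not from the original; only the launch command is taken from above):

```shell
#!/bin/bash
#SBATCH --job-name=ddp-train
#SBATCH --nodes=1
#SBATCH --gres=gpu:2          # two GPUs on one node, matching --nproc_per_node=2
#SBATCH --output=%x-%j.out    # one log per job; every process prints into it

env CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 \
    train.py \
    --dataset Subj \
    --epochs 50 \
    --learning_rate 0.0005 \
    --batch_size 128 \
    --multi_gpu
```

Submit it with `sbatch train.sh`; the duplicated print output described above will then appear in the job's log file.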