Submitted by [deleted] t3_zalbxu in deeplearning
suflaj t1_iym94jr wrote
Reply to comment by normie1990 in Will I ever need more than 24GB VRAM to train models like Detectron2 and YOLOv5? by [deleted]
> I probably should have specified that I'll do fine tuning, not training from scratch, if that makes any difference.
Unless you're freezing layers, it doesn't.
> I know it's a software feature, AFAIK pytorch supports it, right?
No. PyTorch supports Data Parallelism. To get pooling in its full meaning, you need Model Parallelism, for which you'd have to write your own multi-GPU layers and a load balancing heuristic.
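To make the distinction concrete, here's a minimal sketch (model sizes and device placement are illustrative assumptions, falling back to CPU when fewer than two GPUs are present):

```python
import torch
import torch.nn as nn

# Hypothetical two-device split; falls back to CPU if two GPUs aren't available.
dev0 = "cuda:0" if torch.cuda.device_count() > 1 else "cpu"
dev1 = "cuda:1" if torch.cuda.device_count() > 1 else "cpu"

# Data parallelism (what PyTorch gives you out of the box):
# every GPU holds a FULL replica of the model, so memory is NOT pooled.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
dp_model = nn.DataParallel(model)  # splits the batch across replicas

# Model parallelism (what you'd have to write yourself):
# layers live on different devices, activations cross the interconnect.
class TwoDeviceModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(512, 512).to(dev0)
        self.part2 = nn.Linear(512, 10).to(dev1)

    def forward(self, x):
        x = torch.relu(self.part1(x.to(dev0)))
        # this device-to-device transfer is where bandwidth bites
        return self.part2(x.to(dev1))

mp_model = TwoDeviceModel()
out = mp_model(torch.randn(4, 512))
```

Only the second pattern actually lets one model span two cards' memory, and it also needs a load-balancing heuristic to keep both devices busy.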
Be that as it may, using PyTorch itself, NVLink gets you less than 5% gains. Obviously not worth it compared to the 30-90% gains from a 4090. You need stuff like Apex to see visible improvements, but those do not compare to generational leaps, nor do they parallelize the model for you (you still have to do that yourself). Apex's data parallelism is similar to PyTorch's anyways.
Once you parallelize your model, however, you're bound to be bottlenecked by bandwidth. This is why it's not done more often: it only makes sense when the model itself is very large, yet its gradients still fit in pooled memory. NVLink provides only 300 GB/s of bandwidth in the best-case scenario, amounting to roughly 30% performance gains in bandwidth-bottlenecked tasks at best.
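A back-of-envelope sketch of why that bandwidth matters (the model size is an illustrative assumption, the 300 GB/s figure is the best-case number quoted above, and none of this is a benchmark):

```python
# Rough cost of shuttling fp32 gradients across NVLink at its quoted peak.
params = 100e6        # assume a 100M-parameter model (illustrative)
bytes_per_param = 4   # fp32
nvlink_bw = 300e9     # bytes/s, best-case figure from above

transfer_s = params * bytes_per_param / nvlink_bw
print(f"{transfer_s * 1e3:.2f} ms per full gradient exchange")
# -> 1.33 ms per full gradient exchange
```

That cost is paid every step on top of compute, which is why model parallelism only pays off when the model genuinely cannot fit on one card.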
normie1990 t1_iyma2hh wrote
>Be as it be, using Pytorch itself, NVLink gets you less than 5% gains. Obviously not worth compared to 30-90% gains from a 4090.
Thanks, I think I have my answer.
Obviously I'm new to ML and didn't understand everything you tried to explain (which I appreciate). One thing I do know - I will be freezing layers when fine-tuning, so from your earlier comment I guess I won't need more than 24GB.