Submitted by somebodyenjoy t3_z6kr2n in deeplearning
--dany-- t1_iy2149l wrote
Reply to comment by somebodyenjoy in Best GPU for deep learning by somebodyenjoy
Not by much, according to some benchmarks, so raw speed isn't the deciding factor here. Your main concern is whether the model and training data fit in VRAM.
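To make the VRAM point concrete, here is a minimal back-of-the-envelope sketch (not from the thread; the 16-bytes-per-parameter figure assumes fp32 weights, gradients, and Adam's two moment buffers, and ignores activations, which often dominate at large batch sizes):

```python
def training_vram_gb(n_params: int, bytes_per_param: int = 16) -> float:
    """Lower-bound VRAM in GB for fp32 weights + gradients + Adam state.

    16 bytes/param = 4 (weights) + 4 (grads) + 8 (Adam moments).
    Activation memory is NOT included and can easily exceed this.
    """
    return n_params * bytes_per_param / 1e9

# e.g. a hypothetical 1-billion-parameter model:
print(training_vram_gb(1_000_000_000))  # 16.0 GB before activations
```

By this estimate, anything much past ~1.5B parameters already outgrows a single 24 GB card (3090 or 4090) for plain fp32 Adam training, which is why "does it fit" comes before "how fast".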
somebodyenjoy OP t1_iy23k2m wrote
I do hyperparameter tuning too, so the same model has to train multiple times; the more runs I can fit in, the more architectures I can try. So speed is important. But you're saying the 4090 is not much faster than the 3090?
--dany-- t1_iy29wqs wrote
I'm saying 2x 3090s are not much better than a single 4090. According to Lambda Labs benchmarks, a 4090 is about 1.3 to 1.9 times faster than a 3090. If you're after speed, a 4090 definitely makes more sense: it's only slightly slower than 2x 3090s, but much more power efficient and cheaper.
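The arithmetic behind that claim can be sketched quickly. The 1.3x to 1.9x range is from the Lambda Labs figures quoted above; the 2x figure for dual 3090s assumes ideal linear scaling, which real data-parallel training rarely reaches, so the 4090's effective gap is even smaller in practice:

```python
# Relative training throughput, taking a single 3090 as the unit (1.0).
single_3090 = 1.0
dual_3090_ideal = 2.0 * single_3090       # optimistic: perfect 2-GPU scaling
rtx_4090_low, rtx_4090_high = 1.3, 1.9    # Lambda Labs benchmark range

# How close one 4090 gets to the idealized dual-3090 setup:
print(rtx_4090_low / dual_3090_ideal)     # 0.65 -> worst case, 65% of 2x 3090s
print(rtx_4090_high / dual_3090_ideal)    # 0.95 -> best case, 95% of 2x 3090s
```

So even against perfect scaling, the single 4090 lands at 65-95% of the dual-3090 throughput while drawing far less power and costing less, which is the trade-off being described.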
somebodyenjoy OP t1_iy2a1r9 wrote
Exactly what I was thinking. Thanks!