--dany-- t1_iy2149l wrote

Not by much, according to some benchmarks, so speed shouldn't be the deciding factor here. Your main concern is whether the model and training data can fit in your VRAM.
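
A rough way to sanity-check the VRAM side, as a minimal sketch: count parameters and add up the static training buffers. This assumes fp32 weights and an Adam optimizer, and the ResNet-50 is just a stand-in model, not anything specific to your setup:

```python
# Rough static VRAM estimate for training (minimal sketch; assumes fp32
# weights and Adam. The model is an illustrative stand-in.)
import torch
import torchvision.models as models

model = models.resnet50()  # stand-in model for illustration
n_params = sum(p.numel() for p in model.parameters())

bytes_per_param = 4                   # fp32
weights = n_params * bytes_per_param  # model weights
grads = weights                       # one gradient per weight
adam_states = 2 * weights             # Adam keeps two moment buffers per weight

static_gib = (weights + grads + adam_states) / 1024**3
print(f"params: {n_params/1e6:.1f}M, static training memory ~{static_gib:.2f} GiB")
```

Activations usually dominate at larger batch sizes, so for a real answer, run one forward/backward pass and check `torch.cuda.max_memory_allocated()`.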

6

somebodyenjoy OP t1_iy23k2m wrote

I do hyperparameter tuning too, so the same model has to train multiple times. The more runs, the better, since I can try more architectures. So speed is important. But you’re saying the 4090 is not much better than the 3090 in terms of speed, huh?
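
That workflow is basically a grid search, where every combination is another full training run, so per-run speed directly bounds how many architectures you can try. A minimal sketch of the idea (the toy model, data, and search space here are made-up assumptions, not OP's actual setup):

```python
# Minimal grid-search sketch: each hyperparameter combination triggers
# a full training run. Toy model/data/search space are illustrative only.
import itertools
import torch
import torch.nn as nn

X = torch.randn(256, 16)   # toy inputs
y = torch.randn(256, 1)    # toy targets

def train_once(lr, hidden):
    model = nn.Sequential(nn.Linear(16, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(100):   # one full (toy) training run
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

for lr, hidden in itertools.product([1e-2, 1e-3], [32, 128]):
    print(f"lr={lr}, hidden={hidden} -> final loss {train_once(lr, hidden):.4f}")
```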

1

--dany-- t1_iy29wqs wrote

I’m saying 2x 3090s are not much better than a 4090. According to Lambda Labs benchmarks, a 4090 is about 1.3 to 1.9 times faster than a 3090. Even if you’re after speed, a single 4090 makes more sense: it’s only slightly slower than 2x 3090s, but much more power efficient and cheaper.
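
Back-of-envelope version of that comparison, taking the 1.3-1.9x range above and assuming roughly 90% data-parallel scaling efficiency for the second GPU (that efficiency figure is my assumption, not from the benchmarks):

```python
# Back-of-envelope throughput comparison, normalized to one 3090 = 1.0.
# The 1.3-1.9x range is from the Lambda Labs benchmarks cited above;
# the ~0.9 multi-GPU scaling efficiency is an assumption.
r4090_lo, r4090_hi = 1.3, 1.9   # single 4090 vs single 3090
scaling = 0.9                   # assumed data-parallel efficiency
dual_3090 = 2 * 1.0 * scaling   # ~1.8x one 3090

print(f"2x 3090 ~= {dual_3090:.1f}x, 1x 4090 ~= {r4090_lo}-{r4090_hi}x a single 3090")
```

So the 4090 lands in the same ballpark as the dual-3090 setup while drawing less power and costing less.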

4