
thefizzlee t1_iy38vtp wrote

I'm gonna assume the Nvidia A100 80GB is out of your budget, but that's the gold standard for machine learning. They're usually deployed in clusters of 8, but even a single one is already better than two 3090s for deep learning.

If, however, the choice is between two 3090s and one 4090 and you're running into VRAM issues, I'd go for the dual 3090s. Multi-GPU training is very well supported in machine learning frameworks, so you'll essentially be getting double the performance, and from what I know a single 4090 isn't faster than two 3090s. Plus you'll double your VRAM.
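For what it's worth, here's a rough sketch of what that looks like in PyTorch (the model and batch sizes are just placeholders, not anything specific to your setup):

```python
import torch
import torch.nn as nn

# Placeholder model; swap in whatever you're actually training.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

if torch.cuda.device_count() > 1:
    # DataParallel splits each batch across the available GPUs.
    # For serious multi-GPU training, DistributedDataParallel scales better,
    # but this is the one-liner version of "clustering" two 3090s.
    model = nn.DataParallel(model)

model = model.cuda()

inputs = torch.randn(64, 1024).cuda()
outputs = model(inputs)  # forward pass is sharded across both cards
```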

Edit: if you want to save a buck and your software supports it, you could also look into the new Radeon RX 7900 XTX, as long as you don't need CUDA support or tensor cores.
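Quick way to check whether your stack supports it: PyTorch's ROCm build exposes AMD cards through the same `torch.cuda` API, so something like this should just work (assuming the ROCm build is installed, which is the part to verify first):

```python
import torch

if torch.cuda.is_available():
    # On a ROCm build this reports the AMD card, e.g. the 7900 XTX.
    print("training device:", torch.cuda.get_device_name(0))
else:
    print("no supported GPU found, falling back to CPU")
```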
