Submitted by seanrescs t3_10p4lhq in MachineLearning
I am a researcher at a US university with a budget of $25k to build a PC for training various ML algorithms (e.g. DRL, neuromorphic computing, VAEs, etc.). I'm trying to decide between a prebuilt workstation (like https://lambdalabs.com/gpu-workstations/vector) or building with consumer cards like 4090s.
Any advice on which gives the most bang for the buck? I'm not sure how much I'm giving up by going with consumer 24 GB cards vs an A6000 or 6000 Ada, but prebuilt prices climb quickly. Warranty vs. building it myself isn't an issue.
synth_mania t1_j6i8qkk wrote
Well, you're sacrificing GPU virtualization, AFAIK. Only enterprise cards get native support for that feature; consumer cards require hacks that may or may not work.