GPUaccelerated OP t1_iu4tflp wrote
Reply to comment by suflaj in Do companies actually care about their model's training/inference speed? by GPUaccelerated
This makes sense. Scaling horizontally is usually the approach. Thank you for commenting!
But I would argue that hardware for inference is actually bought more often than one would assume. I have many clients who purchase mini-workstations for settings where data processing and inference jobs run on the same premises, to limit latency and data travel.
GPUaccelerated OP t1_iu4smhu wrote
Reply to comment by THE_REAL_ODB in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Definitely. But is it important enough to spend money simply to increase speed? That's what I'm trying to figure out.
GPUaccelerated OP t1_iu1s1xs wrote
Reply to comment by LastVariation in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
Thanks for the comment! You're definitely right.
GPUaccelerated t1_ireeqet wrote
Reply to Deeplearning and multi-gpu or not by ronaldxd2
It’s definitely worth testing! Have some fun and play around with TensorFlow. Once you have the 3 cards set up to work on the same job, time it, and compare your results to running individual jobs. I personally think they’ll do better alone, but you should check it out for yourself. :) Have fun!
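A minimal sketch of that comparison, assuming TensorFlow/Keras is installed. The model and data here are toy placeholders (not a real benchmark), and `tf.distribute.MirroredStrategy` is just one way to spread a single job across all visible GPUs; with no GPUs present it falls back to CPU.

```python
# Sketch: time one job spread across all GPUs (MirroredStrategy)
# vs. the same job on the default single device.
import time
import tensorflow as tf

def make_model():
    # Toy model; substitute your real network here.
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10),
    ])

# Toy data standing in for a real training set.
x = tf.random.normal((2048, 64))
y = tf.random.uniform((2048,), maxval=10, dtype=tf.int32)

def train(strategy=None, epochs=2):
    # Build/compile inside the strategy scope so variables are mirrored.
    scope = (strategy or tf.distribute.get_strategy()).scope()
    with scope:
        model = make_model()
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        )
    start = time.time()
    model.fit(x, y, epochs=epochs, batch_size=256, verbose=0)
    return time.time() - start

multi = train(tf.distribute.MirroredStrategy())  # all visible devices
single = train()                                 # default single device
print(f"multi-device: {multi:.2f}s  single: {single:.2f}s")
```

With small models like this one, cross-GPU synchronization overhead often outweighs the extra compute, which is why the individual-job setup can win.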
GPUaccelerated t1_ireefhv wrote
Reply to comment by Knurpel in Deeplearning and multi-gpu or not by ronaldxd2
The 3070 Ti and 3080 Ti do not support NVLink.
GPUaccelerated OP t1_iu4u69c wrote
Reply to comment by hp2304 in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Wow, your perspective is really worth taking note of. I appreciate your comment!
What I'm understanding is that speed matters more in inference than it does in training.