GPUaccelerated OP t1_iu4tflp wrote

This makes sense. Scaling horizontally is usually the way to go. Thank you for commenting!

But I would argue that hardware for inference is actually bought more often than one might assume. I have many clients who purchase mini-workstations for settings where data processing and inference jobs run on the same premises, to limit latency and data travel.

GPUaccelerated t1_ireeqet wrote

It’s definitely worth the test! Have some fun and play around with TensorFlow. Once you have the 3 cards set up to work on the same job, benchmark it and compare the results against running individual jobs. I personally think they’ll do better alone, but you should check it out for yourself. :) Have fun!
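The comparison above can be sketched with TensorFlow's `tf.distribute.MirroredStrategy`, which spreads one training job across every visible GPU. This is a minimal illustrative sketch, not the commenter's actual workload: the model, synthetic data, and batch sizes are placeholders I've made up for the example.

```python
# Sketch: one job across all visible GPUs vs. independent per-GPU jobs.
# Model and data below are illustrative placeholders, not a real workload.
import tensorflow as tf

def build_model():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

# Synthetic data so the script is self-contained.
x = tf.random.normal((1024, 32))
y = tf.random.normal((1024, 1))

# Option A: a single job spread across all GPUs TensorFlow can see.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = build_model()
    model.compile(optimizer="adam", loss="mse")
history = model.fit(x, y, epochs=1, batch_size=128, verbose=0)

# Option B: independent jobs, one per card — e.g. launch this script
# three times with CUDA_VISIBLE_DEVICES=0, =1, =2 respectively, time
# each run, and compare aggregate throughput against Option A.
print(history.history["loss"][0])
```

Timing both setups on your own models is the only reliable comparison; small models often lose to per-card jobs because of cross-GPU synchronization overhead.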
