GPUaccelerated t1_j5yral4 wrote
Reply to [P] EvoTorch 0.4.0 dropped with GPU-accelerated implementations of CMA-ES, MAP-Elites and NSGA-II. by NaturalGradient
This is really cool.
GPUaccelerated OP t1_iuixkzj wrote
Reply to comment by ShadowStormDrift in Do companies actually care about their model's training/inference speed? by GPUaccelerated
So cool! Good for you!
GPUaccelerated OP t1_iuitwy2 wrote
Reply to comment by suflaj in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Not exactly sure; I'm not a lawyer. But it's something that gets taken very seriously by a lot of my medical-field clients. It's definitely something on their side, not mine. I just help those specific clients go on-prem.
GPUaccelerated OP t1_iuimz8s wrote
Reply to comment by Throwaway00000000028 in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
Right!
GPUaccelerated OP t1_iuimwx3 wrote
Reply to comment by Appropriate_Ant_4629 in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
The way you separated it in 2 categories is very useful for understanding. Thank you!
GPUaccelerated OP t1_iuimm3t wrote
Reply to comment by suflaj in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Right, but in the medical field, for example, it's not a trust issue. It's a matter of laws that prevent patient data from leaving the physician's premises.
GPUaccelerated OP t1_iuim3cp wrote
Reply to comment by ShadowStormDrift in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Very cool! But I think you mean 24GB of VRAM for the 3090?
I'm having issues loading the web page, btw.
GPUaccelerated OP t1_iuilu92 wrote
Reply to comment by mayiSLYTHERINyourbed in Do companies actually care about their model's training/inference speed? by GPUaccelerated
okay cool! Thanks for explaining
GPUaccelerated OP t1_iuilp7x wrote
Reply to comment by sckuzzle in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Got it! Thank you.
GPUaccelerated OP t1_iuill54 wrote
Reply to comment by Rephil1 in Do companies actually care about their model's training/inference speed? by GPUaccelerated
That's pretty intense.
GPUaccelerated OP t1_iu4z9qq wrote
Reply to comment by Ogawaa in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
Yup. The thought framework you just described is popular, based on my experience. But you worded it in a way that makes it really easy to understand.
Thank you for sharing!
GPUaccelerated OP t1_iu4ywuh wrote
Reply to comment by cnapun in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
Your points are super valid. This is what I'm generally understanding.
Adding features and optimizing look like a vicious circle more often than not.
Thank you for commenting!
GPUaccelerated OP t1_iu4ygne wrote
Reply to comment by BackgroundChemist in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
Yeah that makes a lot of sense because we're not just dealing with one bottleneck. There are many possibilities, as you stated.
Thank you for your comment!
GPUaccelerated OP t1_iu4y6sh wrote
Reply to comment by LordDGarcia in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
You put so much into perspective. And it's rare that I get exposure to your industry. I'd love to learn more.
Thank you for your well-described comment. I definitely appreciate it!
GPUaccelerated OP t1_iu4xju0 wrote
Reply to comment by PassionatePossum in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
This is what I'm seeing the most. Which makes so much sense for your use case.
Thank you for sharing!
GPUaccelerated OP t1_iu4xa3i wrote
Reply to comment by badabummbadabing in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
This perspective and use case is really important to note. Thank you for sharing! Your last comment makes so much sense.
GPUaccelerated OP t1_iu4wwwx wrote
Reply to comment by _DarthBob_ in [D] Do companies actually care about their model's training/inference speed? by GPUaccelerated
Ok yeah that's what I'm understanding. Thank you for your comment!
GPUaccelerated OP t1_iu4wl57 wrote
Reply to comment by suflaj in Do companies actually care about their model's training/inference speed? by GPUaccelerated
That's right but sometimes data sensitivity prevents the use of cloud.
GPUaccelerated OP t1_iu4wasf wrote
Reply to comment by wingedrasengan927 in Do companies actually care about their model's training/inference speed? by GPUaccelerated
For which use case if you don't mind me asking? And are you referring to inference or training?
GPUaccelerated OP t1_iu4w6oh wrote
Reply to comment by sckuzzle in Do companies actually care about their model's training/inference speed? by GPUaccelerated
The perspective of your use case makes so much sense. I appreciate you sharing that info!
Mind sharing which use case that would be? I'm also trying to pinpoint which industries care about model speed.
GPUaccelerated OP t1_iu4vppp wrote
Reply to comment by VonPosen in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Really interesting. And that's kind of where my mind was leaning towards.
Faster training usually means more training for the same cost.
Thanks for sharing!
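A back-of-the-envelope way to see the cost angle (all numbers here are made up for illustration, not from the thread):

```python
# Illustrative only: how a training speedup converts into more runs per budget.
gpu_cost_per_hour = 3.00          # hypothetical cloud GPU price, $/hr
budget = 900.00                   # hypothetical monthly compute budget, $
baseline_hours_per_run = 10       # hypothetical time for one training run

speedup = 2.0                     # e.g. faster hardware or better kernels
hours_per_run = baseline_hours_per_run / speedup

# Same dollars buy twice as many experiments at 2x speed.
runs_baseline = budget / (gpu_cost_per_hour * baseline_hours_per_run)  # 30 runs
runs_faster = budget / (gpu_cost_per_hour * hours_per_run)             # 60 runs
```

So at a fixed budget, a 2x speedup doubles the number of training runs you can afford, which is usually how teams actually spend the savings.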
GPUaccelerated OP t1_iu4ve0v wrote
Reply to comment by mayiSLYTHERINyourbed in Do companies actually care about their model's training/inference speed? by GPUaccelerated
OK, right. That's also a project with immense scale.
I guess the bigger the project, the more inference speed matters. But I've never heard of caring deeply about milliseconds in training. Mind sharing why that was important in that use case?
GPUaccelerated OP t1_iu4uxld wrote
Reply to comment by konze in Do companies actually care about their model's training/inference speed? by GPUaccelerated
That makes a lot of sense, and it's also really cool. People resorting to ASICs for inference are definitely playing in the big leagues.
Thanks for sharing!
GPUaccelerated OP t1_iu4umuw wrote
Reply to comment by ShadowStormDrift in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Yeah, see in your use case, speed makes so much sense. Thank you for sharing.
Mind sharing that site with us here?
I'm always interested in taking a look at cool projects.
Also what kind of hardware is currently tasked with your project's inference?
GPUaccelerated t1_j60zhrx wrote
Reply to Cloud VM GPU is much slower than my local GPU by Infamous_Age_7731
It's simply because the 3080 Ti is actually a faster GPU than the A100 for many workloads. The main reason the A100 exists is to fit large models in memory without having to parallelize across multiple cards. *For most cases*
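If you want to check this on your own workload rather than take it on faith, the simplest thing is to time the actual forward pass on each machine. A minimal, device-agnostic timing harness might look like this (the function names are mine, not from the thread; pass `torch.cuda.synchronize` as `sync` when timing a CUDA workload, since GPU kernels launch asynchronously):

```python
import time

def benchmark(fn, warmup=3, iters=20, sync=lambda: None):
    """Return the mean wall-clock time per call of fn, in seconds.

    warmup -- untimed calls so one-time setup (allocations, JIT, cudnn
              autotuning) doesn't skew the measurement.
    sync   -- callable that flushes async work before reading the clock,
              e.g. torch.cuda.synchronize for CUDA workloads.
    """
    for _ in range(warmup):
        fn()
    sync()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    sync()
    return (time.perf_counter() - start) / iters

# Stand-in CPU workload; swap in your model's forward pass.
mean_s = benchmark(lambda: sum(i * i for i in range(100_000)))
```

Running the same harness with the same batch size on both the cloud A100 and the local 3080 Ti makes the comparison concrete, instead of arguing from spec sheets.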