Submitted by GPUaccelerated t3_yf5jm3 in deeplearning
ShadowStormDrift t1_iu53ih6 wrote
Reply to comment by GPUaccelerated in Do companies actually care about their model's training/inference speed? by GPUaccelerated
Of course!
The semantic search, along with a few other key features, hasn't made it up yet. We're aiming to have them live between the end of November and mid-December.
We've got a two-server setup, with the second being our "workhorse" intended for GPU-related jobs. It's an RTX 3090 with 32GB VRAM, 64GB DDR4 RAM, and an 8-core CPU (I forget its exact specs).
GPUaccelerated OP t1_iuim3cp wrote
Very cool! But I think you mean 24GB of VRAM for the 3090?
Issues loading the web page, btw.
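The VRAM correction matters because card memory bounds what models the "workhorse" can serve. A rough back-of-the-envelope sketch (hypothetical model sizes, weights-only; real usage adds activations and framework overhead):

```python
def vram_needed_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weights-only memory in GiB, assuming fp16 (2 bytes/param)."""
    return n_params * bytes_per_param / 1024**3

# e.g. a 7-billion-parameter model in fp16 needs ~13 GiB for weights alone,
# which fits on a 24 GB RTX 3090 but leaves limited headroom for batching.
print(round(vram_needed_gib(7e9), 1))  # → 13.0
```

This is only a lower bound; inference servers typically reserve additional memory for activations and caching.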
ShadowStormDrift t1_iuivphh wrote
GPUaccelerated OP t1_iuixkzj wrote
So cool! Good for you!