Submitted by Qwillbehr t3_11xpohv in MachineLearning
ajt9000 t1_jd5w735 wrote
Speaking of this, do you guys know of ways to run inference and/or training on graphics cards with insufficient VRAM? I have had some success with breaking a model up into multiple smaller models and then running inference on them as a boosted ensemble, but that's obviously not possible with lots of architectures.
I'm just wondering if you can do that with an unfavorable architecture as long as it's pretrained.
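One common alternative to splitting a model into an ensemble is layer-wise offloading: keep the whole model in CPU RAM and move only one layer at a time onto the GPU, so peak VRAM is bounded by the largest single layer rather than the whole network. A minimal PyTorch sketch of the idea (the `offloaded_forward` helper is hypothetical, not from any library; it assumes the model can be expressed as a flat list of sequential layers):

```python
import torch
import torch.nn as nn

# Fall back to CPU so the sketch also runs on machines without a GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

def offloaded_forward(layers, x):
    """Run a sequential stack one layer at a time on the accelerator.

    Each layer is moved to the device, applied, and moved back to CPU,
    so only one layer's weights occupy VRAM at any moment.
    """
    for layer in layers:
        layer.to(device)
        x = layer(x.to(device))
        layer.to("cpu")  # free VRAM before loading the next layer
    return x.cpu()

layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(4)])
out = offloaded_forward(layers, torch.randn(2, 16))
print(tuple(out.shape))
```

The trade-off is speed: the host-to-device transfers dominate runtime, so this is practical for occasional inference but painful for training, where gradient checkpointing or libraries that automate offloading (e.g. Hugging Face Accelerate's `device_map="auto"`) are the usual route.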