Submitted by laprika0 t3_yj5xkp in MachineLearning
utopiah t1_iun6a16 wrote
This is not my field, but I find this question genuinely surprising. Why would one even consider this unless prototyping from the actual jungle? In any other situation, where you have even just a 3G connection, delegating to the cloud (or your own on-premise machines, available online behind a VPN) seems much more efficient as soon as you have any inference to run, and even more so training.
Why do I find the question itself surprising? Because ML is a data-driven field, so the question can be answered with a spreadsheet. Namely, your "model" would be optimizing for faster feedback, so you learn about your problem more quickly, with both your hardware and your time as the costs. If you spend X hours tinkering with an M1 (or M2, or even "just" a 4090) versus an A100, whether in a generic cloud (e.g. AWS, or a local provider like OVH, booting a generalist distribution like Ubuntu), a dedicated setup like lambdalabs.com or coreweave.com, or something even higher level like HuggingFace on their own infrastructure, then IMHO that comparison does give you some insight.
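To make the "answer it with a spreadsheet" point concrete, here is a minimal sketch of the kind of break-even arithmetic meant above. All the prices and the helper name are hypothetical placeholders, not real quotes; plug in your own local-hardware cost and cloud hourly rate.

```python
def breakeven_hours(local_cost: float, cloud_hourly_rate: float) -> float:
    """Hours of rented GPU time whose cost equals buying local hardware.

    Both arguments are user-supplied assumptions, e.g. the price of an
    M2-class machine versus an A100 rental rate from some provider.
    """
    return local_cost / cloud_hourly_rate


# Hypothetical numbers: a $2,500 local machine vs. renting at $1.10/hour.
hours = breakeven_hours(2500.0, 1.10)
print(f"Break-even after about {hours:.0f} GPU-hours of rental")
```

Below the break-even point, renting wins on hardware cost alone; the comparison above leaves out your own time (iteration speed, setup friction), which the comment argues you should also put in the spreadsheet.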
Everything else seems anecdotal because others might not have your workflow.
TL;DR: no, unless they are minuscule models (and of course yes if you just use it to ssh into remote machines); but IMHO you have to figure it out yourself, as we all have different needs.
PS: to clarify, and not to sound like an opinionated idiot: even though it's not my field, I have run and trained dozens of models, both locally and remotely.