Submitted by KlausMich t3_zql367 in MachineLearning
Hi everyone!
I want to use a server to continuously train my ML models without keeping my PC on 24/7. I am currently running fairly simple deep learning models that would take a week on my computer.
So far the best starting option I've found is the AWS t2.micro instance. I've seen that Google Cloud and Nvidia also have offerings.
Could you please guide me through this or give me suggestions about which one would be better? I am not an expert and this is my first time doing it.
ggf31416 t1_j0ypnpp wrote
Training a large model on CPU alone is madness: it will take forever and waste a lot of electricity. You need a GPU with CUDA, or an equivalent solution fully supported by your framework. See e.g. this benchmark.
A t2.micro instance may be free under the free tier, but it's useless for anything resource intensive. You are much better off just using Google Colab or Kaggle notebooks.
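Whatever instance you end up on, it's worth verifying that a CUDA GPU is actually visible before launching a long training run. A minimal stdlib-only sketch (it just shells out to `nvidia-smi`, which ships with the Nvidia driver; the function name is mine):

```python
import shutil
import subprocess

def cuda_gpu_visible() -> bool:
    """Return True if nvidia-smi reports at least one GPU on this machine."""
    if shutil.which("nvidia-smi") is None:
        return False  # driver tooling not installed -> almost certainly no CUDA GPU
    try:
        out = subprocess.run(
            ["nvidia-smi", "-L"],  # "-L" lists GPUs, one per line
            capture_output=True, text=True, timeout=10,
        )
    except OSError:
        return False
    return out.returncode == 0 and "GPU" in out.stdout

print(cuda_gpu_visible())
```

In a Colab/Kaggle notebook this should print `True` only when a GPU runtime is attached; most frameworks have their own check too (e.g. PyTorch's `torch.cuda.is_available()`).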
If you have to train models very often (like every day) and the 24GB of an RTX 3090, or better an RTX 4090, is enough, a dedicated computer is the most cost-effective way in the long run. If you can't afford an RTX 3090 and 12GB is enough, a 3060 with 12GB will do (for ML we usually want as much VRAM as possible; raw computing power is often not the bottleneck).
Vast.ai is a cost-effective way of renting computing power for non-constant use, much cheaper than AWS or GCP. But beware: because of how it works, the instance is not fully secure against attacks from the host, so you can't use it with sensitive data.
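The buy-vs-rent tradeoff comes down to a simple break-even calculation. A rough sketch with made-up illustrative prices (check current listings; both numbers vary a lot):

```python
# Hypothetical figures for illustration only -- not real quotes.
GPU_PRICE = 1600.0   # USD, e.g. a high-end consumer card
RENTAL_RATE = 0.40   # USD per hour for a comparable rented GPU

def breakeven_hours(purchase_price: float, hourly_rate: float) -> float:
    """Hours of rented GPU time that would cost as much as buying outright."""
    return purchase_price / hourly_rate

hours = breakeven_hours(GPU_PRICE, RENTAL_RATE)
print(f"Break-even after {hours:.0f} rented hours "
      f"(~{hours / 24:.0f} days of 24/7 use)")
```

With these example numbers you'd need thousands of rented hours before buying pays off, which is why renting wins for occasional use and a dedicated box wins for near-constant training (electricity and resale value shift the math a bit either way).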
Any good CUDA GPU will be able to train on a small dataset in less than a day, so take that into account when deciding between purchasing a GPU and cloud computing.