People with RTX cards will often rent cloud GPU instances when training or inference needs more than 24 GB of VRAM, which it often does. Also, sometimes we just need to shorten the training time with 8x A100s, so... yeah, renting seems to be the only way to go.
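For a rough sense of why training blows past 24 GB so quickly, here is a back-of-envelope sketch. It assumes full fine-tuning with mixed-precision Adam (a common setup); the per-parameter byte counts are the usual rule-of-thumb figures, and activations, batch size, and framework overhead add more on top:

```python
# Rough VRAM estimate for full fine-tuning with mixed-precision Adam.
# These are rule-of-thumb numbers; actual usage also depends on
# activations, batch size, sequence length, and framework overhead.

def training_vram_gb(n_params_billion: float) -> float:
    bytes_per_param = (
        2    # fp16 weights
        + 2  # fp16 gradients
        + 8  # Adam optimizer states (two fp32 moments)
        + 4  # fp32 master copy of the weights
    )
    return n_params_billion * 1e9 * bytes_per_param / 1024**3

for size in (1, 3, 7, 13):
    print(f"{size}B params -> ~{training_vram_gb(size):.0f} GB before activations")
```

Even a 7B-parameter model comes out around ~100 GB under these assumptions, so a single 24 GB RTX card is out of the question without tricks like LoRA, quantization, or offloading.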