Submitted by imgonnarelph t3_11wqmga in MachineLearning
cbsudux t1_jd1qzp7 wrote
How long did the training take on an A100?
benfavre t1_jd2n1cg wrote
One epoch of fine-tuning the 30B model with a llama-lora implementation (mini-batch-size=2, maxlen=384) takes about 11 hours.
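For context, a minimal sketch of that kind of LoRA fine-tuning run, using the Hugging Face peft and transformers libraries. The comment doesn't say which implementation was used, so everything here beyond mini-batch-size=2, maxlen=384, and 1 epoch (the model path, LoRA rank/alpha, target modules, and the dummy dataset) is an assumption:

```python
import torch
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model

# Placeholder: a local LLaMA 30B checkpoint (not specified in the comment).
model_name = "path/to/llama-30b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

# LoRA adapters on the attention projections; r/alpha/dropout are common
# defaults, not values taken from the comment.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Toy dataset, tokenized to maxlen=384 as in the comment; a real run
# would use an instruction-tuning corpus instead.
texts = ["Example instruction and response."] * 8

def tokenize(batch):
    out = tokenizer(
        batch["text"], truncation=True, max_length=384, padding="max_length"
    )
    out["labels"] = out["input_ids"].copy()
    return out

train_dataset = Dataset.from_dict({"text": texts}).map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="lora-out",
    per_device_train_batch_size=2,  # mini-batch-size=2, as in the comment
    num_train_epochs=1,             # 1 epoch, as in the comment
    fp16=True,
    logging_steps=10,
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```

Note that at 30B scale the frozen base weights alone are roughly 60 GB in fp16, so single-GPU runs like this typically load the base model quantized (e.g., 8-bit) and train only the small LoRA adapters.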
2muchnet42day t1_jd3pu0m wrote
Can you train with 24 GB of VRAM?