LetterRip t1_izdam40 wrote
Just tried this and it ran great on a 6GB VRAM card in a laptop with only 16GB of RAM (it barely fit into VRAM; I think I'm using bitsandbytes and xformers). I've only tried the corgi example so far, but it seemed to work fine. Trying it with a person now.
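For anyone curious, the memory savers I mean are roughly the standard diffusers/bitsandbytes ones. A minimal sketch of that setup (generic API calls and a placeholder model id, not the actual training script):

```python
import torch
import bitsandbytes as bnb
from diffusers import StableDiffusionPipeline

# Load in fp16 to halve weight memory (model id here is just the usual SD 1.5 checkpoint)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# xformers memory-efficient attention cuts activation memory in the UNet forward pass
pipe.unet.enable_xformers_memory_efficient_attention()

# 8-bit Adam keeps optimizer state in int8, roughly quartering optimizer memory.
# (In the real LoRA setup only the injected LoRA weights would be trainable;
# this just shows where the 8-bit optimizer plugs in.)
params_to_train = [p for p in pipe.unet.parameters() if p.requires_grad]
optimizer = bnb.optim.AdamW8bit(params_to_train, lr=1e-4)
```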
cloneofsimo OP t1_izdlve0 wrote
Glad it worked for you with such small memory constraints!
LetterRip t1_izdm55i wrote
> Glad it worked for you with such small memory constraints!
Currently training with image size 768 and gradient accumulation steps = 2.

If steps is set to 2000, will it actually run to 4000? It didn't stop at 2000 as I expected and is currently past 3500; I figured I'd wait until it passed 4000 before killing it, in case the accumulation steps act as a multiplier. (It went to 3718 and quit, right after I wrote the above.)
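For reference, the generic gradient-accumulation pattern I have in mind counts only optimizer updates as "steps", so accumulation=2 means twice as many batches per counted step rather than twice as many steps. A toy, self-contained sketch of that convention (not this script's actual code, so it may count differently):

```python
import torch
from torch import nn

# Toy model and data, just to illustrate the counting
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataloader = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(100)]

gradient_accumulation_steps = 2
max_train_steps = 10
global_step = 0

for i, (x, y) in enumerate(dataloader):
    loss = nn.functional.mse_loss(model(x), y) / gradient_accumulation_steps
    loss.backward()

    if (i + 1) % gradient_accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
        global_step += 1  # only optimizer updates are counted as steps

    if global_step >= max_train_steps:
        break

print(global_step, i + 1)  # 10 counted steps, but 20 batches consumed
```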
Teotz t1_izjzdve wrote
Don't leave us hanging!!! :)
How did the training go with a person?
LetterRip t1_izksf4k wrote
It is working, but I need to use prior preservation loss; otherwise the concept bleeds into every word in the prompt. So I'm generating photos for the preservation loss now.
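For anyone unfamiliar, the prior-preservation idea is roughly: mix "class" images generated by the base model into each batch and add a second, weighted loss term on them, so the base concept isn't overwritten by the new subject. A toy sketch of that loss (made-up tensors, not the actual training code):

```python
import torch
import torch.nn.functional as F

prior_loss_weight = 1.0

# Pretend UNet noise predictions and targets for a combined batch:
# first half = instance images of the subject, second half = generated class images
model_pred = torch.randn(4, 4, 64, 64)
target = torch.randn(4, 4, 64, 64)

pred_inst, pred_prior = model_pred.chunk(2, dim=0)
tgt_inst, tgt_prior = target.chunk(2, dim=0)

instance_loss = F.mse_loss(pred_inst, tgt_inst)
prior_loss = F.mse_loss(pred_prior, tgt_prior)

# The class images pull the model back toward what it already knew
loss = instance_loss + prior_loss_weight * prior_loss
```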
LetterRip t1_izm8rkq wrote
It did work, but now I can no longer launch LoRA training at 768 or even 512 (CUDA out of memory), only 256. No idea what changed.
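If anyone hits the same thing, the first thing I'd check is how much VRAM is actually free before launching, in case something else is holding memory. A quick sketch using standard PyTorch calls:

```python
import torch

# What's free on the GPU right now (bytes)
free, total = torch.cuda.mem_get_info()
print(f"free: {free / 1e9:.2f} GB / total: {total / 1e9:.2f} GB")

# What this process itself has already grabbed
print(f"allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1e9:.2f} GB")
```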
JanssonsFrestelse t1_j0l89ve wrote
Same here with 8GB VRAM, although it looks like I can't use mixed_precision=fp16 with my RTX 2070, so that might be why.
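In case it helps debugging: the card itself should handle fp16 (Turing has fp16 tensor cores), so a quick sanity check with plain PyTorch can tell whether the problem is the hardware or the script/config. A sketch, assuming nothing about the training script:

```python
import torch

# RTX 2070 should report compute capability (7, 5), which supports fp16
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))

# Minimal autocast test to see whether fp16 math runs at all
a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = a @ b
print(c.dtype)  # torch.float16 if autocast kicked in
```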