JanssonsFrestelse t1_j0l89ve wrote
Reply to comment by LetterRip in [P] Using LoRA to efficiently fine-tune diffusion models. Output model less than 4MB, two times faster to train, with better performance. (Again, with Stable Diffusion) by cloneofsimo
Same here with 8 GB of VRAM, although it looks like I can't use mixed_precision=fp16 with my RTX 2070, so that might be why.
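For what it's worth, the RTX 2070 is a Turing card with compute capability 7.5, which should have fp16 Tensor Core support, so the failure may be coming from somewhere else in the setup. A minimal sketch (assuming PyTorch is installed) to check what the card actually reports:

```python
def has_fast_fp16(capability):
    """True if a CUDA compute capability (major, minor) has fp16 Tensor
    Cores, which arrived with 7.0 (Volta); Turing cards report 7.5."""
    major, _ = capability
    return major >= 7

if __name__ == "__main__":
    try:
        import torch
        if torch.cuda.is_available():
            cap = torch.cuda.get_device_capability()
            print(f"compute capability {cap[0]}.{cap[1]}, "
                  f"fast fp16: {has_fast_fp16(cap)}")
        else:
            print("no CUDA device detected; mixed_precision=fp16 needs a CUDA GPU")
    except ImportError:
        print("torch not installed")
```

If the card reports 7.5 but fp16 still fails, the problem is more likely in the training script or library versions than in the hardware.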