sam__izdat t1_iziau6e wrote
Reply to comment by Why_Soooo_Serious in [P] Using LoRA to efficiently fine-tune diffusion models. Output model less than 4MB, two times faster to train, with better performance. (Again, with Stable Diffusion) by cloneofsimo
What are you having trouble following? I'm not trying to be rude, but it's already a *less* technical method, since HF's diffusers and accelerate stuff will download everything for you and set it all up. I'd rather it were a little more technical, because as it stands it's a bit of a black box.
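To illustrate what I mean (a minimal sketch, not the repo's actual training code, and the model id / dtype are just example choices): diffusers pulls the whole pretrained pipeline from the Hub and caches it on first use, so there's almost nothing to set up by hand.

```python
import torch
from diffusers import StableDiffusionPipeline

# First call downloads and caches the full pipeline (UNet, VAE, text
# encoder, tokenizer, scheduler) from the Hugging Face Hub automatically.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # example model id
    torch_dtype=torch.float16,          # half precision; older cards may need float32
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("test.png")
```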
I was having problems with unhelpful error messages until I updated transformers. I'm still getting CUDA illegal memory access errors at the start of training, but I think that's because support for old Tesla GPUs is just fading -- I had the same issue with newer PyTorch trying to run any SD model in full precision.
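For what it's worth, a quick way to check whether the card has simply aged out of recent PyTorch builds (that's my assumption about the cause, not something I've confirmed):

```python
import torch

# Recent PyTorch binary wheels have dropped older compute capabilities
# (e.g. Kepler-era Tesla cards), which can surface as cryptic CUDA
# errors instead of a clean "unsupported GPU" message.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))  # e.g. (3, 7) for a K80
print(torch.cuda.get_arch_list())           # arches the installed build was compiled for
```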