Submitted by HPCAI-Tech t3_ysfimk in MachineLearning
Hey folks. We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning. It reduces pretraining cost by 6.5x and the hardware cost of fine-tuning by 7x, while also speeding up both processes.
Open source address: https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
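If you want a rough picture of what "fine-tuning only the UNet on a single GPU" looks like in code, here is a minimal sketch using Hugging Face diffusers. This is not our ColossalAI example itself; the checkpoint ID, the input tensors, and the hyperparameters are placeholders for illustration, not recommendations.

```python
# Minimal sketch (not the ColossalAI example): fine-tune only the Stable Diffusion
# UNet with Hugging Face diffusers, using gradient checkpointing to cut activation
# memory. Checkpoint ID, batch data, and learning rate are placeholders.
import torch
from torch.nn import functional as F
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"  # assumption: any SD 1.x checkpoint works here
device = "cuda"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the UNet is trained; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
unet.enable_gradient_checkpointing()  # trade extra compute for lower memory

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def training_step(pixel_values, captions):
    """One DDPM training step. pixel_values: images normalized to [-1, 1];
    captions: list of strings paired with the images."""
    with torch.no_grad():
        # Encode images to latents and captions to text embeddings (frozen modules).
        latents = vae.encode(pixel_values.to(device)).latent_dist.sample() * 0.18215
        tokens = tokenizer(captions, padding="max_length",
                           max_length=tokenizer.model_max_length,
                           truncation=True, return_tensors="pt").input_ids.to(device)
        encoder_hidden_states = text_encoder(tokens)[0]

    # Standard objective: predict the noise added at a randomly sampled timestep.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    loss = F.mse_loss(noise_pred, noise)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The memory savings in our release come on top of this kind of setup; the sketch is only meant to show which parts of the model are actually trained during fine-tuning.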
We'd love to hear your thoughts on our work!
Flag_Red t1_iw1lntd wrote
It's mentioned a few times in the articles/README for this tool that it enables fine-tuning on consumer hardware. Are there any examples of doing something like this? How long does fine-tuning on a 3080 (or something similar) take to teach the model a new concept? What sort of dataset is needed? How does it compare to something like DreamBooth?
I'd love to try fine-tuning on some of the datasets I have lying around, but I'm not sure where to start, or even if it's really viable on consumer hardware.