Submitted by cccntu t3_1182fqd in MachineLearning
brucebay t1_j9g8al3 wrote
Thank you for this. I've never used LoRA except as part of Stable Diffusion training. You linked the MS LoRA lib too. What are the differences between yours and theirs?
cccntu OP t1_j9hsouu wrote
Theirs requires you to rewrite the whole model, replacing every layer you want to apply LoRA to with its LoRA counterpart, or to use monkey-patching. Mine uses PyTorch parametrizations to inject the LoRA logic into existing models. If your model has nn.Linear, you can call add_lora(model) to add LoRA to all the linear layers. And it's not limited to Linear: you can see how I extended it to Embedding and Conv2d in a couple of lines of code. https://github.com/cccntu/minLoRA/blob/main/minlora/model.py
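For readers unfamiliar with parametrizations, here is a minimal sketch of the general idea, not minLoRA's exact internals: torch.nn.utils.parametrize wraps a layer's existing weight so that every read of .weight returns W + scale * (B @ A), with only the small A and B matrices trainable. The class name LoRAParametrization, the helper add_lora_to_linears, and the init constants below are illustrative assumptions, not the library's API.

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(nn.Module):
    # Illustrative sketch; minLoRA's actual class may differ.
    def __init__(self, fan_out, fan_in, rank=4, alpha=1.0):
        super().__init__()
        # Standard LoRA init: A is small random, B is zero, so the
        # parametrized weight starts out equal to the original weight.
        self.lora_A = nn.Parameter(torch.randn(rank, fan_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(fan_out, rank))
        self.scale = alpha / rank

    def forward(self, W):
        # parametrize calls this whenever the layer reads .weight,
        # so the forward pass transparently sees W + scale * (B @ A).
        return W + self.scale * (self.lora_B @ self.lora_A)

def add_lora_to_linears(model, rank=4):
    # Hypothetical helper mirroring the spirit of add_lora(model):
    # no subclassing or monkey-patching, just a parametrization
    # registered on the existing weight of every nn.Linear.
    for module in list(model.modules()):
        if isinstance(module, nn.Linear):
            parametrize.register_parametrization(
                module,
                "weight",
                LoRAParametrization(module.out_features, module.in_features, rank),
            )

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
add_lora_to_linears(model)
out = model(torch.randn(2, 16))  # forward pass now uses W + BA
```

To fine-tune, you would then freeze everything except the LoRA parameters, e.g. by setting requires_grad = False on every parameter whose name does not contain "lora_". Because the original weight is untouched, the parametrization can be removed (or merged) after training.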
mdda t1_j9s8ptw wrote
FWIW, I gave a shout-out to minLoRA at our Machine Learning meetup (in Singapore) last night: https://redcatlabs.com/2023-02-23_MLSG_Frameworks/#/15/2
cccntu OP t1_j9tfr5r wrote
Thanks for the shout-out!
brucebay t1_j9i2kd6 wrote
Thank you for this clear explanation.