cccntu

cccntu OP t1_j9jz6ov wrote

This project started out as me exploring whether PyTorch parametrizations could be used to do LoRA, and they turned out to be perfect for the task! I simply wanted to share that.
I think it would be interesting to see it integrated into PEFT too, although they already have their own LoRA implementation there.

1

cccntu OP t1_j9hsouu wrote

Theirs requires you to rewrite the whole model and replace every layer you want to apply LoRA to with its LoRA counterpart, or use monkey-patching. Mine uses PyTorch parametrizations to inject the LoRA logic into existing models: if your model has nn.Linear, you can call add_lora(model) to add LoRA to all the linear layers. And it's not limited to Linear; you can see how I extended it to Embedding and Conv2d in a couple of lines of code. https://github.com/cccntu/minLoRA/blob/main/minlora/model.py
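
To illustrate the idea, here is a minimal sketch of how a LoRA parametrization can be attached to existing linear layers with torch.nn.utils.parametrize. The class and helper names (`LoRAParametrization`, `add_lora_to_linear`) are made up for this example and are not the repo's actual API; see the linked file for the real implementation.

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize


class LoRAParametrization(nn.Module):
    """Wraps a layer's weight so it is read as W + scaling * (B @ A)."""

    def __init__(self, fan_in, fan_out, rank=4, alpha=1.0):
        super().__init__()
        # Only the low-rank factors are trainable; the wrapped weight stays frozen.
        self.lora_A = nn.Parameter(torch.randn(rank, fan_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(fan_out, rank))
        self.scaling = alpha / rank

    def forward(self, weight):
        # Called every time the layer accesses .weight; no model code changes needed.
        return weight + self.scaling * (self.lora_B @ self.lora_A)


def add_lora_to_linear(layer: nn.Linear, rank=4):
    # nn.Linear stores weight as (out_features, in_features).
    fan_out, fan_in = layer.weight.shape
    parametrize.register_parametrization(
        layer, "weight", LoRAParametrization(fan_in, fan_out, rank=rank)
    )


# Usage: patch every nn.Linear in an existing model, in place.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
for module in model.modules():
    if isinstance(module, nn.Linear):
        add_lora_to_linear(module)
```

The same pattern extends to other layer types: the parametrization only needs to know the shape of the weight it wraps, which is why Embedding and Conv2d can be covered with a few extra lines.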

9