Submitted by cccntu t3_1182fqd in MachineLearning
Hey r/MachineLearning! I wanted to share a new PyTorch library I've been working on that I think could be really useful for anyone looking to fine-tune large models with LoRA.
https://github.com/cccntu/minlora
The library is based on the LoRA technique (Low-Rank Adaptation), which, as the paper puts it, "freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer."
With this library, you can easily apply LoRA to any PyTorch model with just a few lines of code.
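To give a sense of what the technique does under the hood, here is a minimal, self-contained sketch of the LoRA idea from the paper: the pre-trained weight is frozen and a trainable low-rank update B·A is added alongside it. Note this is an illustrative sketch of the general technique, not minlora's actual API; the class and parameter names (`LoRALinear`, `rank`, `alpha`) are my own.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer: frozen nn.Linear plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        # Trainable rank-decomposition matrices; B starts at zero so the
        # layer initially behaves exactly like the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the low-rank correction x @ (B @ A)^T, scaled.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale

layer = LoRALinear(nn.Linear(16, 8), rank=2)
out = layer(torch.randn(3, 16))
print(out.shape)  # torch.Size([3, 8])
```

Only `lora_A` and `lora_B` receive gradients, which is why LoRA fine-tuning needs so little memory compared to full fine-tuning.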
One of the benefits of this library is that it's really small - just 100 lines of code. Despite its size, it's quite powerful and has been tested on a variety of models, including Karpathy's nanoGPT and Stable Diffusion.
It also features an easy-to-use interface that allows you to serve multiple LoRA models at the same time!
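Serving multiple LoRA models at once works because the frozen base weights are shared and only the small adapter matrices differ per model. A rough sketch of that idea, continuing the hedged example above (again, my own illustrative names, not minlora's API):

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """Illustrative layer: one frozen base, several named LoRA adapters,
    switchable at run time without touching the base weights."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # shared, frozen pre-trained weights
        self.rank = rank
        self.adapters = nn.ParameterDict()
        self.active = None  # name of the adapter currently in use

    def add_adapter(self, name: str):
        # Each adapter is just a tiny (A, B) pair; B starts at zero.
        self.adapters[name + "_A"] = nn.Parameter(
            torch.randn(self.rank, self.base.in_features) * 0.01)
        self.adapters[name + "_B"] = nn.Parameter(
            torch.zeros(self.base.out_features, self.rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)
        if self.active is not None:
            A = self.adapters[self.active + "_A"]
            B = self.adapters[self.active + "_B"]
            y = y + x @ A.T @ B.T  # add the active adapter's low-rank update
        return y

layer = MultiLoRALinear(nn.Linear(16, 8), rank=2)
layer.add_adapter("task1")
layer.add_adapter("task2")
x = torch.randn(3, 16)

layer.active = "task1"   # switch adapters per request
y1 = layer(x)
layer.active = None      # or fall back to the plain base model
y_base = layer(x)
```

Because the adapters are orders of magnitude smaller than the base model, keeping many of them resident and picking one per request is cheap.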
pyepyepie t1_j9fj0uf wrote
WOW - what a cool idea (the paper), I was not aware it existed! Thank you so much for the simple implementation and the info.