Submitted by kkimdev in MachineLearning
Nezarah wrote
For specifically personal use and research, not commercial? LLaMA is a good place to start, and/or Alpaca 7B. They're small scale (can run on most consumer hardware locally), can be LoRA-trained and fine-tuned, and have a fairly high context limit (2048 tokens). A rough fine-tuning sketch is below.
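Here's a minimal sketch of what LoRA fine-tuning looks like with Hugging Face's `peft` library. Assumes you already have `transformers` and `peft` installed and LLaMA weights sitting at `./llama-7b` (that path is just a placeholder):

```python
# Minimal LoRA setup sketch -- assumes local LLaMA weights at ./llama-7b.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("./llama-7b")
tokenizer = AutoTokenizer.from_pretrained("./llama-7b")

# LoRA injects small trainable low-rank matrices into the attention layers,
# so only a tiny fraction of the weights actually get updated during training.
config = LoraConfig(
    r=8,                                  # rank of the update matrices
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

From there you'd train with a normal `transformers` Trainer loop on your own dataset; the frozen base weights are what keep this feasible on a single consumer GPU.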
Outputs can be comparable to GPT-3, and can be further enhanced by prepending context to the prompt ("pre-context", i.e. in-context learning) rather than retraining.
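Concretely, "pre-context" just means stuffing relevant facts or few-shot examples into the prompt before the question. A quick sketch, reusing the `model` and `tokenizer` from the snippet above (the facts here are made up for illustration):

```python
# In-context learning sketch: ground the model by prepending context.
context = (
    "You are a helpful assistant. Facts you may use:\n"
    "- The library opens at 9am on weekdays.\n"
)
question = "What time does the library open on Monday?"
prompt = f"{context}\nQ: {question}\nA:"

inputs = model_inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The 2048-token limit is the ceiling on how much of this context you can prepend, which is why the context window matters for this approach.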
You can add chaining/branching functionality through the LangChain library.
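For example, chaining two prompts so one model call feeds the next. This is a sketch against LangChain's API as of early 2023 (the library moves fast, so import paths may have shifted since); `./llama-7b` is again a placeholder path:

```python
# Sketch of a two-step LangChain pipeline over a local HF model.
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

hf = pipeline("text-generation", model="./llama-7b", max_new_tokens=128)
llm = HuggingFacePipeline(pipeline=hf)

# Step 1: summarize some text. Step 2: branch off the summary into a follow-up.
summarize = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["text"],
        template="Summarize in one sentence:\n{text}\nSummary:",
    ),
)
followup = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["summary"],
        template="Suggest one follow-up question about: {summary}\nQuestion:",
    ),
)

summary = summarize.run(text="LLaMA is a family of open foundation models.")
print(followup.run(summary=summary))
```

You can branch the output of one chain into several downstream chains the same way, which is where LangChain starts paying off over hand-rolled prompt plumbing.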