Business-Lead2679 OP t1_jecfagu wrote
Reply to comment by Rei1003 in [P] Introducing Vicuna: An open-source language model based on LLaMA 13B by Business-Lead2679
The main point of these open-source ~10B-class models is that they fit on average consumer hardware while still delivering strong performance, even offline. A 100B model is hard to train because of its size, and even harder to serve on hardware powerful enough to handle multiple concurrent requests at good generation speed. Not to mention how expensive that can be to run. At the 1B scale, models usually don't perform well, since they simply don't have enough capacity. Some models at that size are decent, yes, but a well-trained ~10B model is usually significantly better and can still fit on consumer hardware.
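
To put "fits on consumer hardware" in perspective, here's a rough back-of-envelope calculation. It's only a sketch: the 20% overhead factor and the precision levels are illustrative assumptions, not measured numbers, and real memory use also depends on context length and serving setup.

```python
# Rough VRAM needed just to hold a model's weights at different precisions.
# Overhead for activations / KV cache varies a lot; 20% is an illustrative guess.

def weight_memory_gb(params_billions: float, bytes_per_param: float,
                     overhead: float = 0.2) -> float:
    """Approximate memory (GB) to load a model's weights."""
    return params_billions * 1e9 * bytes_per_param * (1 + overhead) / 1e9

for name, params in [("1B", 1), ("13B (Vicuna)", 13), ("100B", 100)]:
    fp16 = weight_memory_gb(params, 2.0)   # 16-bit floats: 2 bytes/param
    int4 = weight_memory_gb(params, 0.5)   # 4-bit quantized: 0.5 bytes/param
    print(f"{name:>13}: ~{fp16:6.1f} GB fp16, ~{int4:5.1f} GB 4-bit")
```

Under these assumptions a 13B model quantized to 4 bits lands around 8 GB, within reach of a single consumer GPU, while a 100B model needs multi-GPU serving even when quantized, which is exactly the cost argument above.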