Submitted by Business-Lead2679 t3_1271po7 in MachineLearning
Rei1003 t1_jecdo3h wrote
What's the point of these 10B models? It seems more reasonable now to work with either 100B models (via API) or 1B models.
Business-Lead2679 OP t1_jecfagu wrote
The main point of these open-source 10B models is that they fit on average consumer hardware while still delivering strong performance, even offline.

A 100B model is hard to train because of its size, and even harder to serve: you need a machine powerful enough to handle multiple concurrent requests while keeping generation speed acceptable, and running that gets expensive fast.

1B models, on the other hand, usually don't perform well, since they don't have enough capacity. Some models at that size are good, yes, but a correctly trained 10B model is usually significantly better and can still fit on consumer hardware.
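To put rough numbers on the hardware argument: weight memory scales linearly with parameter count and bytes per parameter. Here's a back-of-the-envelope sketch (weights-only memory, ignoring activations, KV cache, and runtime overhead, which add more in practice):

```python
# Rough weights-only memory estimate for different model sizes
# and precisions. Real usage is higher (activations, KV cache, etc.).

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weights_gb(params_billion: float, precision: str) -> float:
    """Approximate weight memory in GB for a model of the given size."""
    return params_billion * BYTES_PER_PARAM[precision]

for size in (1, 10, 100):
    row = ", ".join(
        f"{p}: {weights_gb(size, p):6.1f} GB" for p in BYTES_PER_PARAM
    )
    print(f"{size:>3}B params -> {row}")

# 10B params  -> fp32: ~40 GB, fp16: ~20 GB, int8: ~10 GB, int4: ~5 GB
# A 4-bit-quantized 10B model fits in roughly 5-6 GB, i.e. a consumer
# GPU; a 100B model still needs ~50 GB even at 4-bit, which puts it
# firmly in datacenter territory.
```

Which is exactly why the 10B range is the sweet spot for local use: with quantization it fits on a single consumer GPU, while 100B doesn't at any common precision.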