gelukuMLG t1_j9kftza wrote

3

dwarfarchist9001 t1_j9knt85 wrote

It was shown recently that for LLMs ~0.01% of parameters explain >95% of performance.
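A toy magnitude-pruning sketch of that general idea (my own illustration, not the referenced paper's method; `keep_fraction` is an arbitrary stand-in value):

```python
import torch

# Toy illustration: keep only the largest-magnitude weights of a random
# linear layer, zero out the rest, and compare outputs. A random layer
# won't reproduce the cited result; this just shows the mechanics.
torch.manual_seed(0)
layer = torch.nn.Linear(512, 512, bias=False)
x = torch.randn(8, 512)

keep_fraction = 0.01  # hypothetical fraction of weights to keep
w = layer.weight.data
k = int(w.numel() * keep_fraction)
threshold = w.abs().flatten().topk(k).values.min()
pruned = torch.where(w.abs() >= threshold, w, torch.zeros_like(w))

full_out = x @ w.t()
pruned_out = x @ pruned.t()
# Cosine similarity between outputs hints at how much of the layer's
# function the kept weights carry.
sim = torch.nn.functional.cosine_similarity(
    full_out.flatten(), pruned_out.flatten(), dim=0
)
print(f"output similarity with {keep_fraction:.0%} of weights: {sim.item():.3f}")
```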

5

gelukuMLG t1_j9kxnj4 wrote

But more parameters allow for broader knowledge, right? You can't have a 6-20B model with knowledge as broad as a 100B+ model's, right?

1

Ambiwlans t1_j9lab3g wrote

At this point we don't really know what the bottleneck is. More params are an easyish way to capture more knowledge if you have the architecture and the $$... but there are a lot of other techniques available that increase the efficiency of the parameters.
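One well-known example of such a technique is low-rank adaptation (LoRA); a minimal sketch, assuming a frozen pretrained base layer:

```python
import torch

# LoRA-style sketch: instead of training the full d_out x d_in weight
# matrix, train two small matrices A and B whose product is added to the
# frozen base weight. With rank r << d, far fewer parameters are trainable.
class LoRALinear(torch.nn.Module):
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.base = torch.nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)  # frozen "pretrained" weight
        self.A = torch.nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(d_out, r))  # zero init: starts as identity update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale

layer = LoRALinear(1024, 1024, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} of {total} ({trainable / total:.2%})")
```

With rank 8 on a 1024-wide layer, under 2% of the layer's parameters end up trainable.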

9

dwarfarchist9001 t1_j9lb1wl wrote

Yes, but how many parameters must you actually have to store all the knowledge you realistically need? Maybe a few billion parameters are enough to store the basics of every concept known to man, and more specific details could be stored in an external file that the neural net accesses with API calls.
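That's essentially retrieval augmentation. A hypothetical sketch, where `embed` and the in-memory `knowledge_store` are stand-ins for a real text encoder and external database:

```python
import numpy as np

# Stand-in encoder: hashes text to a pseudo-random vector. With this stub
# the similarity scores are meaningless; a real encoder would map related
# texts to nearby vectors so retrieval actually finds relevant facts.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)

knowledge_store = [
    "The Eiffel Tower is 330 m tall.",
    "Water boils at 100 °C at sea level.",
]
store_vecs = np.stack([embed(doc) for doc in knowledge_store])

def retrieve(query: str, k: int = 1) -> list[str]:
    # Cosine similarity between the query vector and every stored vector.
    q = embed(query)
    scores = store_vecs @ q / (
        np.linalg.norm(store_vecs, axis=1) * np.linalg.norm(q)
    )
    return [knowledge_store[i] for i in np.argsort(-scores)[:k]]

query = "How tall is the Eiffel Tower?"
context = "\n".join(retrieve(query))
# The retrieved facts get prepended to the prompt, so the model reads the
# specifics from outside instead of memorizing them in its parameters.
prompt = f"Context:\n{context}\n\nQuestion: {query}"
print(prompt)
```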

5

turnip_burrito t1_j9kgb2q wrote

We already knew parameters aren't everything, or else we'd just be using really large feedforward networks for everything. Architecture, data, and other tricks matter too.

3