Submitted by blacklemon67 t3_11misax in MachineLearning
Hey everyone!
A quick Fermi estimate: if a person encountered 50,000 tokens a day (an extremely high estimate, roughly a novel per day, assuming 1 token = 1 word), then by the time they were 20 they would have encountered about 365 million tokens.
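For reference, here's that arithmetic spelled out (a minimal sketch; the 50k tokens/day and 1-token-per-word figures are just the assumptions from above):

```python
# Back-of-the-envelope lifetime token count under the post's assumptions.
tokens_per_day = 50_000          # very generous: roughly a novel per day
years = 20
total_tokens = tokens_per_day * 365 * years
print(f"{total_tokens:,} tokens")  # 365,000,000 tokens
```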
Obviously this person would be VERY well read. However, if we feed a transformer language model the same number of tokens, then according to scaling laws it would perform worse than GPT-2 (which was trained on a dataset roughly an order of magnitude larger).
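To make the scaling-law point concrete, here's a rough sketch using the parametric loss fit from the Chinchilla paper (Hoffmann et al., 2022), L(N, D) = E + A/N^α + B/D^β. The constants below are the approximate published fit, and plugging in a GPT-2-sized model with a human-scale vs. web-scale token budget is my own illustration, not a rigorous comparison:

```python
# Approximate Chinchilla fit (Hoffmann et al., 2022); treat the numbers as illustrative.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def estimated_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# A ~1.5B-parameter model on a "human lifetime" budget of 365M tokens
# vs. the same model on a web-scale ~10B-token budget.
print(estimated_loss(1.5e9, 365e6))  # ~3.6
print(estimated_loss(1.5e9, 10e9))   # ~2.7
```

The data term B/D^β dominates at 365M tokens, which is the gap the question is pointing at.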
So the question is, why do language models need so many tokens? Does anyone know of any review papers/blog posts discussing this observation?
My theory is that we haven't yet found the most efficient architecture for language, and that transformers' ability to excel at many different tasks means you have to give them a lot of data to force them to develop the right neural circuits for the job.
TLDR: Humans need substantially fewer tokens than transformer language models. What's the current understanding of why this is?
harharveryfunny t1_jbjhmif wrote
Humans don't learn by locking themselves in a room at birth with a set of encyclopedias or a print-out of the internet. We learn by interacting with the world: perceive, generalize, theorize, experiment, learn from feedback, and so on.
It's impressive how well these LLMs perform given what is really a very tough task: building an accurate world model from nothing but "predict the next word" feedback. But it's hardly surprising that they need massive amounts of data to compensate for the task being so tough.
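For anyone unfamiliar with what that feedback actually looks like, here's a minimal sketch of the next-token prediction loss in PyTorch (the shapes and random tensors are purely illustrative, not any specific model's code):

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: batch of 2 sequences, length 8, vocab of 50k tokens.
logits = torch.randn(2, 8, 50_000)          # model's predicted logits at each position
tokens = torch.randint(0, 50_000, (2, 8))   # the training text itself (token ids)

# The entire learning signal: how well position t predicts the token at position t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, 50_000),  # predictions for positions 0..6
    tokens[:, 1:].reshape(-1),           # targets: the actual next tokens
)
print(loss)
```

Everything the model "knows" about the world has to be squeezed out of that one scalar, which is part of why it takes so many tokens.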