diviludicrum t1_j8bxeji wrote
Reply to comment by big_gondola in [R] [N] Toolformer: Language Models Can Teach Themselves to Use Tools - paper by Meta AI Research by radi-cho
I still think u/belacscole is right - this is analogous to the rudimentary use of tools seen in some higher primates and a small handful of other animals. Tool use requires a sufficient degree of critical thinking to recognise that a problem exists and to select the appropriate tool for solving it. Done with recursive feedback, this would lead to increasingly skilful tool selection and use, and so to better detection and solution of problems over time. Of course, if a problem cannot possibly be solved with the tools available, no matter how refined their usage, it will never be overcome this way - humans have hit these sorts of technocultural chokepoints repeatedly throughout our history. Such problems require the development of new tools.
So the next step in furthering the process is abstraction, which takes intelligence from critical thinking to creative thinking. Suppose a tool-capable AI is trained on a dataset that links diverse problems to the models that solve them and to the processes that produced those models, so that it can attempt to create and then apply new tools to novel problems, and then assess its own success (likely via supervised learning, at least at first). We may then be able to equip it with the "tool for making tools", letting it solve the set of all AI-solvable problems (given enough time and resources).
uristmcderp t1_j8db0gw wrote
The whole "assessing its own success" part is the bottleneck for most interesting problems. You can't have a feedback loop unless the model can accurately evaluate whether it's doing better or worse. That isn't a trivial problem either, since humans aren't all that great at using absolute metrics to describe quality once past a minimum threshold.
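The point about noisy evaluation breaking the loop can be illustrated with a toy sketch (my own construction, not anything from the paper): a greedy self-improvement loop that keeps a candidate only when its evaluator scores a tweak higher. With an exact evaluator, true quality can only go up; add enough evaluator noise and the loop happily accepts regressions.

```python
import random

def improve(true_quality, noise, steps=200, seed=0):
    """Toy self-improvement loop: propose a tweak, keep it only if the
    (possibly noisy) evaluator scores it above the current candidate."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        candidate = x + rng.uniform(-1, 1)
        # The evaluator is the bottleneck: it only sees quality + noise.
        noisy_new = true_quality(candidate) + rng.gauss(0, noise)
        noisy_old = true_quality(x) + rng.gauss(0, noise)
        if noisy_new > noisy_old:
            x = candidate
    return true_quality(x)

quality = lambda x: -(x - 3.0) ** 2   # hidden "true" objective, peak at x = 3
exact = improve(quality, noise=0.0)   # accepts only genuine improvements
noisy = improve(quality, noise=5.0)   # noise swamps the quality signal
```

With `noise=0.0` every accepted step is a real improvement, so the final quality is guaranteed to be at least the starting quality; with large noise that guarantee disappears, which is the feedback-loop failure described above.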
ksatriamelayu t1_j8ebpx4 wrote
Do people use things like evolutionary fitness + changing environments to measure that quality? A dynamic environment seems like it might be the answer?
Oat-is-the-Best t1_j8ef5x0 wrote
How do you calculate your fitness? That runs into the same problem: the model can't assess its own success.
LetterRip t1_j8dpgxc wrote
There are plenty of examples of tool use in nature that don't require intelligence - for instance, ants:
https://link.springer.com/article/10.1007/s00040-022-00855-7
The tool use demonstrated by Toolformer can be purely statistical in nature; no need for intelligence.
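To make "purely statistical" concrete: in the Toolformer setup the LM just learns to emit inline markers like `[Calculator(400/1400)]` as ordinary tokens, and a separate post-processor executes them and splices the result back into the text. A minimal sketch of such a post-processor (the regex and the `TOOLS` table are my own illustrative assumptions, not the paper's implementation):

```python
import re

# Hypothetical tool registry; the Calculator here naively evals arithmetic.
TOOLS = {"Calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def run_tools(text):
    """Find Toolformer-style [Tool(args)] markers in generated text and
    replace each with the tool's output; unknown tools are left as-is."""
    def substitute(match):
        name, arg = match.group(1), match.group(2)
        return TOOLS[name](arg) if name in TOOLS else match.group(0)
    return re.sub(r"\[(\w+)\(([^)]*)\)\]", substitute, text)

print(run_tools("Twice seven is [Calculator(2*7)]."))  # prints "Twice seven is 14."
```

Nothing in this pipeline requires the model to "understand" the tool: it only needs the statistics of where a `[Calculator(...)]` token sequence tends to appear in text.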
thecodethinker t1_j8dpuru wrote
It is purely statistical, isn’t it?
LLMs are statistical models after all.