Submitted by yeah_i_am_new_here t3_126tuuc in Futurology
Looking to discuss this premature thought I'm having.
As a precursor to this thought experiment, I'd like to say that I'm pushing aside the ethics of developing a functional AGI, and thinking in the vein of "it's already happening, regardless of my ethical dilemmas on the subject".
So.
What is AGI, really? In my understanding, AGI is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution.
If we can agree on that definition (and that's a big if), then it seems to me that if we were to give gpt-X autonomy over their "bodies", an AGI could exist today. Even if it's not "actual" AGI, and you could argue it's already familiar with most tasks due to the nature of its training, it would just need to seem enough like an AGI to fool us. (This brings up another question: does an AGI need emotion to be what we would consider an AGI?) For example, a multimodal humanoid bot could walk around, gather information with visual & haptic sensors, and find problems**. After diagnosing a problem, it could compute x number of solutions, enact them on the physical world, and repeat. (The contents of the "problem" and "solution" here are ambiguous on purpose, as I believe that draws toward the ethical side of this thought experiment, which I'm ignoring for the sake of a clearer discussion about how close we are to this actually happening.)
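To make the loop concrete, here's a minimal sketch in Python of the perceive-diagnose-act cycle described above. Everything in it is hypothetical: `Robot`, `llm_propose_solutions`, and the data formats are stand-ins I made up, not any real robotics or gpt-X API.

```python
# Hypothetical sketch of the perceive-diagnose-act loop.
# None of these interfaces correspond to a real library.

import time

class Robot:
    """Placeholder for a multimodal humanoid platform."""

    def sense(self) -> dict:
        # Would return camera frames, haptic readings, etc.
        return {"vision": None, "haptics": None}

    def act(self, plan: str) -> None:
        # Would translate a high-level plan into motor commands.
        print(f"executing: {plan}")

def llm_propose_solutions(observation: dict, n: int = 3) -> list[str]:
    # Stand-in for a call to gpt-X: describe the scene, ask it to
    # diagnose a problem and return n candidate solutions.
    return [f"candidate plan {i}" for i in range(n)]

def agent_loop(robot: Robot) -> None:
    while True:
        observation = robot.sense()                  # gather information
        plans = llm_propose_solutions(observation)   # diagnose + compute solutions
        robot.act(plans[0])                          # enact one on the physical world
        time.sleep(1.0)                              # and repeat
```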
I feel as though we're only a couple of exceptionally significant hardware upgrades (battery, memory, compute power) away from the scenario I described above. I'm by no means an expert in robotics, but given recent developments at some of the most prominent robotics labs around the US, we don't seem too far from giving a bot at Boston Dynamics access to gpt-X (3, 4, 5, etc.) and letting it run loose on the world, "solving problems".
In short, it may be that solving LLMs is solving AGI, since language is the medium through which we operate within our society. Giving an AI access to our language, plus physical autonomy (with some unprecedented hardware advancement), would allow an AI actor to participate in our society, just as a new person would.
I'd love to discuss some counter points / criticisms + follow up thoughts.
**This is where my thought falls apart - I don't know whether gpt-X (or any other LLM/neural net/software) could have the initiative to "solve problems" without explicit direction to do so. I have one potential idea: perhaps you could give it a standing instruction to work with / collaborate with people, and perhaps that's how we (people, without AGI or a codebase) function anyway - i.e., if there were no people to talk to and no society to partake in, we would lie dormant in a dark room the same way an AGI bot would when given no initiative.
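One way to picture the "standing instruction" idea: the model has no drive of its own, so an outer loop supplies one. This is just a sketch under that assumption; the prompt wording and `call_llm` are made-up placeholders, not a real API.

```python
# Sketch of supplying initiative from outside via a standing instruction.
# `call_llm` is a hypothetical stand-in for a gpt-X call.

STANDING_INSTRUCTION = (
    "You are collaborating with the people around you. "
    "From the observation below, identify a problem and propose a next step."
)

def call_llm(prompt: str) -> str:
    return "stub response"  # placeholder

def run(observations: list[str]) -> None:
    # Without this loop feeding it observations plus the standing
    # instruction, the model never produces anything at all -- the
    # "dormant in a dark room" case from the footnote.
    for obs in observations:
        step = call_llm(f"{STANDING_INSTRUCTION}\nObservation: {obs}")
        print(step)
```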
samwell_4548 t1_jeashtb wrote
One issue is that LLMs cannot actively learn from their surroundings; they need to be trained prior to use. This is very different from how human brains work.
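This point can be illustrated with the standard PyTorch deployment pattern: at inference time the weights are frozen, so nothing the model "experiences" updates its parameters. Any adaptation during use has to live in the context window instead. The tiny `model` below is a generic stand-in, not any particular LLM checkpoint.

```python
# Frozen weights at inference time: the usual deployment pattern.

import torch

model = torch.nn.Linear(8, 8)  # stand-in for a trained network
model.eval()                   # inference mode (disables dropout, etc.)

tokens = torch.randn(1, 8)     # stand-in for a tokenized observation

with torch.no_grad():          # no gradients -> no weight updates
    out = model(tokens)

# To change behavior per interaction without retraining, you can only
# change what goes into the prompt/context (in-context learning) --
# unlike a human brain, which updates continuously from experience.
```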