Submitted by yeah_i_am_new_here t3_126tuuc in Futurology

Looking to discuss this premature thought I'm having.

As a precursor to this thought experiment, I'd like to say that I'm pushing aside the ethics of developing a functional AGI, and thinking in the vein of "it's already happening, regardless of my ethical dilemmas on the subject".

So.

What is AGI, really? In my understanding, AGI is the representation of generalized human cognitive abilities in software so that, faced with an unfamiliar task, the AGI system could find a solution.

If we can agree on that definition (and that's a big if), then it seems true to me that if we were to give gpt-X autonomy over their "bodies", an AGI could exist today. Even if it's not "actual" AGI, and you could argue it's already familiar with most tasks due to the nature of its training, it would just need to seem enough like an AGI to fool us. (This brings up another question: does AGI need emotion to be what we would consider an AGI?) For example, a multimodal humanoid bot could walk around, gather information with visual & haptic sensors, and find problems**. After diagnosing a problem, it could compute x number of solutions, then enact them on the physical world, and repeat. (The contents of the "problem" and "solution" here are ambiguous on purpose, as I believe that draws towards the ethical side of this thought experiment, which I am ignoring for the sake of having a clearer discussion about how close we are to this actually happening.)
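To make that loop a little more concrete, here's a very rough Python sketch of the sense -> diagnose -> act cycle I'm picturing. Everything in it (StubRobot, StubLLM, the prompts) is a made-up placeholder, not a real robotics or GPT API:

```python
# A rough sketch of the sense -> diagnose -> plan -> act loop described above.
# Everything here is a made-up stand-in: a real robot body and a real gpt-X
# endpoint would expose nothing like these toy interfaces.

class StubRobot:
    """Placeholder for a humanoid body with sensors and actuators."""
    def read_sensors(self):
        return {"vision": "a tipped-over chair", "touch": "nothing in gripper"}

    def execute(self, plan):
        print(f"executing: {plan}")

class StubLLM:
    """Placeholder for gpt-X; a real system would call a hosted model."""
    def complete(self, prompt):
        return f"(model output for: {prompt[:40]}...)"

def agent_loop(model, robot, steps=3):
    directive = "Collaborate with the people around you and help solve problems."
    for _ in range(steps):
        observations = robot.read_sensors()   # gather info via visual & haptic sensors
        problems = model.complete(f"{directive}\nObservations: {observations}\nWhat problems do you see?")
        plan = model.complete(f"Problems: {problems}\nPropose a concrete solution.")
        robot.execute(plan)                   # enact it on the physical world, then repeat

agent_loop(StubLLM(), StubRobot())
```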

I feel as though we're only a couple of exceptionally significant hardware upgrades (battery, memory, compute power) away from the scenario I described above. I'm by no means an expert in robotics, but with recent developments at some of the most popular robotics labs around the US, we don't seem too far from giving a bot at Boston Dynamics access to gpt-X (3, 4, 5, etc.) and letting it run loose on the world, "solving problems".

In short, it may be that solving LLMs is solving AGI, as language is the medium through which we operate within our society. Giving an AI access to our language and giving it physical autonomy (with some unprecedented hardware advancement) would allow an AI actor to participate in our society, just as a new person would.

I'd love to discuss some counter points / criticisms + follow up thoughts.

**This is where my thought falls apart - I don't know whether gpt-X (or any other LLM/neural net/software) could have the initiative to "solve problems" without explicit direction to do so. I have one potential idea: perhaps you could give it the instruction to work with / collaborate with people, and perhaps that's how we (people, without AGI or a codebase) function anyway - i.e., if there were no people to talk to and no society to partake in, we would lie dormant in a dark room the same way an AGI bot would when it's given no initiative.

2

Comments


samwell_4548 t1_jeashtb wrote

One issue is that LLMs cannot actively learn from their surroundings; they need to be trained prior to use. This is very different from how human brains work.
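A toy illustration of that split, in plain Python rather than anything resembling a real LLM: the model below is fit once on a fixed dataset, and its single weight is frozen at use time, so nothing it encounters afterwards teaches it anything.

```python
# Toy illustration of "train once, then freeze": a one-parameter linear model,
# not an LLM, but the same split between an offline training phase and an
# inference phase in which nothing new is learned.

def train(data, lr=0.01, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:                 # repeated passes over a fixed dataset
            w += lr * (y - w * x) * x     # gradient step on squared error
    return w

def infer(w, x):
    return w * x                          # weights are frozen: new inputs never change w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])   # learns roughly y = 2x
print(infer(w, 10.0))                     # ~20.0, and using the model teaches it nothing
```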

7

elehman839 t1_jebley2 wrote

Yes, and I think this reflects an interesting "environmental" difference experienced by humans and AIs.

Complex living creatures (like humans) exist for a long time in a changing world, and so they need to continuously learn and adapt to change. Now, to some extent, we do follow the model of "spend N years getting trained and then M years reaping the benefit", but that's only a subtle shift in emphasis, not a black-and-white split like ML training vs. inference.

In contrast, AI has developed largely for short-term, high-volume applications. In that setting, it makes sense to spend a lot of upfront time on training, because you're going to effectively clone the thing and run it a billion times, amortizing the training cost. And giving it continuous learning ability isn't that useful, because each application lasts only minutes, seconds, or even milliseconds.
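A back-of-the-envelope version of that amortization argument, with numbers that are entirely made up for illustration:

```python
# Back-of-the-envelope amortization with entirely made-up numbers: a huge
# one-time training bill almost disappears per query once the trained model
# is cloned and served at very high volume.

training_cost = 50_000_000         # dollars, hypothetical
queries_served = 1_000_000_000     # a billion inferences over the model's lifetime
cost_per_inference = 0.002         # dollars per query, hypothetical

total_per_query = training_cost / queries_served + cost_per_inference
print(f"${total_per_query:.3f} per query")   # $0.052 -> training adds only ~5 cents per query
```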

Making persistent AI that continuously learns and remembers seems like a cool problem! I'm sure this will require some new ideas, but with the number of smart people now engaged in the area, I bet those will come quickly, if there's sufficient market demand. And I can believe that there might be...

3

yeah_i_am_new_here OP t1_jebnrn5 wrote

Well put! To piggyback off your point, I think the persistence issue in its current state is what will ultimately stop it from taking over too many knowledge worker jobs. The efficiency it currently creates for each knowledge worker will of course be a threat to employment if production doesn't increase as well, but if history is at all trustworthy, production will increase.

I think the biggest issue right now (outside of data storage) for creating AI that is persistent in its knowledge is the algorithm to receive and accurately weigh new data on the fly. You could say it's the algorithm for wisdom, even.
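Just to make "weigh new data on the fly" concrete, about the simplest version is an exponential moving average; a real system would obviously need far more than this, but it shows the shape of the problem (the alpha value here is an arbitrary choice):

```python
# About the simplest possible way to "weigh new data on the fly": an
# exponential moving average, where alpha decides how much a fresh observation
# outweighs everything learned so far. Picking that weight well, per fact and
# per source, is the hard open problem.

def update(estimate, observation, alpha=0.1):
    return (1 - alpha) * estimate + alpha * observation

belief = 20.0                       # prior belief, e.g. "this room is about 20 degrees"
for reading in [22.0, 23.0, 21.5]:  # new sensor readings arriving one at a time
    belief = update(belief, reading)
print(belief)                       # belief drifts toward the new data without discarding the prior
```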

1

yeah_i_am_new_here OP t1_jeatis3 wrote

Interesting. So then we can suppose that if you had enough of these humanoids walking around, they could gather data and feed it back into a "hive mind" (as much as I hate that saying), and retrain the software running the humanoid with that new data, basically giving it a chance to "learn".
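Something like this sketch of the collect -> retrain -> redeploy cycle, where every function is a stand-in for a real data and training pipeline:

```python
# Sketch of the "hive mind" cycle: each robot logs what it encounters, the logs
# are pooled centrally, a new model version is trained offline, and the fleet
# is updated. Every function here is a stand-in for a real data/training pipeline.

def retrain(old_model, pooled_data):
    """Stand-in for an offline training run over the pooled fleet data."""
    return {"version": old_model["version"] + 1, "trained_on": len(pooled_data)}

fleet_logs = [
    ["saw a blocked doorway", "handed a tool to a worker"],   # robot 1's day
    ["navigated a wet floor", "failed to open a jar"],        # robot 2's day
]

model = {"version": 1, "trained_on": 0}
pooled = [event for log in fleet_logs for event in log]       # data fed back to one place
model = retrain(model, pooled)                                # periodic offline retraining
print(model)   # the new version then gets pushed back out to every robot
```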

I see many hardware limitations with this possibility, but it's an interesting thought.

Perhaps another interesting thought based off of yours is: how much brand new data in our surroundings (that isn't already captured in the internet data these models are trained on) do you suppose exists in the world?

0

Mercurionio t1_jebcy9s wrote

The question is how the machine will iterate. Does it get new info about its surroundings and incorporate it immediately, completely changing its behavior based on the outcome? Or does it just collect the data and then reprocess the words into a bigger salad?

Currently, GPT-4 can return to its original incorrect answers because it keeps iterating on the salad until the user is satisfied.

1

NotACryptoBro t1_jeaya8c wrote

GPT is just building sequences of words based on probabilities. You guys are giving all that way too much credit. Please first learn about how AI / machine learning works and start discussions after that.
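Stripped way down, the mechanism looks something like this toy sketch (a hand-written probability table standing in for a trained transformer, purely for illustration):

```python
# Stripped-down picture of "words based on probabilities": at each step the
# model assigns a probability to every possible next word and one is chosen.
# A real transformer computes this table from the whole preceding context;
# here it's just a hand-written toy.

import random

next_word_probs = {
    "the":  {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.3, "ran": 0.7},
    "idea": {"sat": 0.1, "ran": 0.9},
}

def generate(word, length=3):
    out = [word]
    for _ in range(length):
        if word not in next_word_probs:
            break
        options = next_word_probs[word]
        # sample in proportion to probability -- stochastic, so the same
        # starting word can produce different continuations on each run
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```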

6

yeah_i_am_new_here OP t1_jeazz8i wrote

I am familiar with how these transformers work and I'm not suggesting that anything is conscious here. Truthfully, I don't think we can create consciousness, if that's what you received from my post. The fact of the matter is that our nature of communication can be defined by matrices of probabilities, and GPTs illustrate this pretty damn well. Therefore, it stands to reason that other perceptive abilities & routines we may have as people can also be defined by matrices of probabilities, and enacted by something not human. Since you seem to be an expert in AI / ML, do you think this is true?

2

ninjadude93 t1_jeb7jjw wrote

I see everyone saying something along the lines of "humans communicate/think in the same way chatgpt/NNs come up with blocks of text", but that's just not true. ChatGPT is stochastic; you can get two different outputs from the same simple input. When I'm writing this reply to you, I'm not just picking the most likely string of words; I'm sitting here considering each word I want to say. As far as I know, LLMs by design are incapable of this kind of reasoning.

5

yeah_i_am_new_here OP t1_jeb94fc wrote

I agree, I'm just not convinced of any evidence that the reasoning that went into your response is integral to the validity of the response itself. So basically, my argument is that whether or not LLMs can reason isn't really that important, because the output is compelling either way. I'd like to believe that there's some magic in our capability to reason that makes the world run a little better, but I just don't know

−1

ninjadude93 t1_jecln8c wrote

If you don't have a system capable of logical reasoning, you don't have an AGI.

1

SlurpinAnalGravy t1_jebnkt4 wrote

Your whole premise is predicated on the idea that AGI is even a potential outcome from it.

Your logic was built on fundamental misunderstandings and presuppositions that the outcome was a possibility.

Don't get mad at people for pointing out your flaws.

0

yeah_i_am_new_here OP t1_jebubwh wrote

Can't tell if you're trolling or not, but nobody's mad here! Just looking for a discussion to throw around some thought provoking ideas. I have a good question for you. How would you know AGI if you saw it? What would be a defining factor that makes it obvious that a system has reached that level?

0

SlurpinAnalGravy t1_jebuobt wrote

Your assumption is that AGI is an AI that broaches the singularity, correct?

0

Shiningc t1_jebq09p wrote

Well think of it like this. If you have somehow acquired a scientific paper from the future that's way more advanced than our current understanding of science, you still won't be able to decipher it until you've personally understood it using reasoning.

If an AI somehow manages to stumble upon a groundbreaking scientific paper and hand it to you, you still won't be able to understand it, and more importantly, neither does the AI.

0

yeah_i_am_new_here OP t1_jebwqnk wrote

I think I see what you're saying. I'm gonna try and simplify it for my caveman brain so I know we're on the same page, and then pose a question for you -

1 - I read a scientific paper from the year 3023 with new info and new words (or combinations of words; for example, if I read the words "string theory" in the 1930s, I'd have no idea what to do with it) with new meanings/ideas that really haven't existed before this time

2 - No matter how much I read it, I really just won't understand how these new concepts and words connect to my legacy concepts and words, until someone reasons out for me what those new words and concepts mean, or I, say, "get creative" and figure it out for myself

3 - I study the connection between the old concepts and new concepts until I have a clear understanding and roadmap of the connection between them

So what I'm getting from your comment is that AI really can't do step 2, but I, a human, can. But I'd propose that the only way to do step 2 is by using the current roadmap I have and proposing new solutions, then testing them to see if they align with the solution (maybe oversimplifying here).

So my question for you is, to determine the truth of the process in step 2, is it testing or proposing new solutions that limits AI?

0

Shiningc t1_jec0je6 wrote

I mean, since the AI can't "reason", it can only propose new solutions randomly and haphazardly. And well, that may work, in the same way that DNA has developed without the use of any reasoning.

But I think what humans are doing is running that process inside a virtual simulation they have created in their minds. And since the real world is apparently a rational place, that must require reasoning. This means we don't even have to bother testing in the real world, because we can do it in our minds. That's why a lot of things are never actually tested: we can reason that something "makes sense" or "doesn't make sense" and know that it would fail the test.
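A toy version of that "simulate before acting" idea, with a hard-coded table standing in for the internal world model:

```python
# Toy version of "test it in the mind before testing it in the world":
# candidate actions are run through a crude internal model first, and only the
# ones that "make sense" there would ever be tried in the slow, expensive
# real world. The model here is a hard-coded table, purely for illustration.

def internal_model(action):
    """Stand-in for reasoning: predict the outcome of an action without acting."""
    predicted = {
        "push the door": "opens",
        "pull the door": "opens",
        "walk through the wall": "fails",
    }
    return predicted.get(action, "unknown")

candidates = ["walk through the wall", "push the door", "pull the door"]
plausible = [a for a in candidates if internal_model(a) == "opens"]
print(plausible)   # the wall idea is rejected in simulation, never tested for real
```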

When we make a decision and think about the future, that's basically a virtual simulation that requires a complex chain of reasoning. If an AI were to become autonomous to be able to make a complex decision on its own, then I would think that the AI would require a "mind" that works similar to ours.

1

yeah_i_am_new_here OP t1_jecg2aw wrote

I love the comparison to how DNA has developed. Definitely a great parallel to draw there that I haven't heard before - what a thought!! I agree with everything you're saying. Thanks for the thoughtful replies!

0

NotACryptoBro t1_jedh4ec wrote

>Truthfully, I don't think we can create consciousness, if that's what you received from my post.

That's actually what I thought, because you wrote that "In my understanding, AGI is the representation of generalized human cognitive abilities in software so ..."

1

yeah_i_am_new_here OP t1_jee8f2h wrote

I guess if it's true that consciousness is a cognitive ability, but I don't really think we have any idea what consciousness is or where it comes from. I guess "most likely" it's some kind of cognitive ability, so I hear you there, but I leave that out of my idea of AGI because it's all conjecture. For all I know consciousness comes from your liver.

1

NotACryptoBro t1_jeewqdm wrote

>but I don't really think we have any idea what consciousness is or where it comes from

Or if we only think it exists, idk

2

wiredwalking t1_jeb6g1d wrote

I mean, the human brain is just neurons firing. The economy is just individuals doing their jobs. Great things can come from simple, collective mechanisms. Put enough hydrogen atoms together and they start to think about themselves.

0

Shiningc t1_jebo0qr wrote

The problem is we don't know how that simple mechanism works. It took a while for someone to come up with the simple idea of gravity or evolution via natural selection.

1

Cerulean_IsFancyBlue t1_jecf72b wrote

The human brain is also only one of the systems involved in human actions and decision making. I’m not talking about any kind of spiritual stuff. I mean actual systems that influence brain chemistry.

There are areas of cognition in which it is quite possible that important decisions are being made outside the brain, and our executive function rationalizes the decision, like Mayor Quincy running to the front of a protest to "lead" it.

I think one great layperson introduction to this kind of systems interaction is contained in the book Gut (Giulia Enders).

I don't know if we literally need to simulate each subsystem, but it does lead me to believe that we don't yet understand the system that we are trying to model. It isn't just neurons, and "just neurons" is hard enough.

That said, there's a lot to be achieved by throwing more power at the problem. Many problems in the realm of imitating humans, from playing chess to visual recognition systems, were not defeated by specialized approaches but eventually fell to sheer processing power. For me this means X is probably 5+ generations away, and a lot of that is simply because I can't picture what the future looks like further down the road than that.

1

NotACryptoBro t1_jedh74s wrote

>Great things can come from simple, collective mechanisms

That's the point: the brain isn't simple. The last breakthrough was a complete map of a worm's 'brain'.

1

SlurpinAnalGravy t1_jebmqb4 wrote

Every time I mention this and tell people their fearmongering is unnecessary, I get a dozen idiots saying I'm wrong. This sub isn't worth debating anyone in; just let the same ~1k boomer doomers jack each other off. At least they have a quarantined little bubble to do it in.

0

NotACryptoBro t1_jedgztg wrote

"I don't think you understand how it works, it's not a simple auto complete. The people making these models don't even understand how it works anymore, how would you?"

That's OP's response :D

edit: it wasn't OP, just a random know-it-all

0

alecs_stan t1_jeh0h1z wrote

Yeah, tell that to translators.

0

SlurpinAnalGravy t1_jeh3u2w wrote

Was literally a cryptologic linguist while enlisted and did dodic terp work.

What did you want to tell me?

1

[deleted] t1_jecpjgy wrote

[deleted]

0

NotACryptoBro t1_jedgx94 wrote

> The people making these models don't even understand how it works anymore, how would you?

Haha, good one. Dunning Kruger in full action

0

Shiningc t1_jebniwc wrote

An LLM is just a bunch of statistics, and it can't generate anything new.

The unsolved problem of human cognition/AGI has always been that it has the ability to solve a problem that it has not been able to solve before, i.e., creativity.

1

Cerulean_IsFancyBlue t1_jecghpo wrote

So this is interesting. On the one hand, I am very pessimistic that we are anywhere close to achieving human-level intelligence and cognition. I don't think it possesses intuition or feelings, or any of the things that you might think are necessary for true creativity.

But … this might be a generation of AI that is actually better at creativity than it is at being factual and correct. Language generation has the ability to produce sentences that are plausible and coherent. But without some kind of additional subsystem, it's actually not very good at fact checking. So it's possible that this tool will be a boost to human creativity by being able to generate tons of alternatives and variations on ideas, and not a boost to human accuracy or precision like many previous generations of "Thinking Machines" have been.

GPT is less “calculator” and more “crazy friend who spits out inspiring nonsense”. It produces fanciful novel output — made of the things you put into it, rearranged. But it does so in such a powerful way, drawing on such a wealth of examples, that the output can actually feel creative. Usually it’s creative via an existing style, so it’s a derivative sort of creativity, if that’s not an oxymoron.

But anyway. I find it interesting that, in terms of how this boosts human abilities, it's more of a creativity boost.

2

alecs_stan t1_jeh08iw wrote

We don't need new hardware; at this point it's just a computer science problem.

1