Submitted by Just-A-Lucky-Guy t3_1200joq in Futurology
Procrasturbating t1_jdgllfs wrote
Now just hope that it never develops its own motives and extends its capabilities without permission. We would never keep up with it.
ErikTheAngry t1_jdihv7w wrote
It's a LLM. Not a general intelligence.
All it effectively does is correlate, retrieve, and extrapolate existing information. It does not generate new data.
i0i0i t1_jdmj6q5 wrote
We don't have a rigorous definition of intelligence. How sure are you that you're ever being truly creative? Next time you're talking to someone, as you're speaking, pay close attention to the next word that comes out of your mouth. Where did it come from? When did you choose that specific word to follow the previous one? What algorithm is being followed in your brain that resulted in the choice of that word? The fact is that we don't know, and not having a real understanding of human intelligence should make us at least somewhat open to the possibility that an artificial system that is quickly becoming indistinguishable from an intelligent agent may in fact be, or become, an intelligent agent.
ErikTheAngry t1_jdn193d wrote
We don't really need a rigorous definition when we already have a general definition that it fails.
Intelligence is the ability to gain and apply knowledge and skills.
You're very right that human behaviour involves a lot of mimicry. I've noticed more than just my word choice being influenced when I'm getting to know someone. Part of that is an evolved behaviour intended to aid in socialization (as humans are social creatures).
I write code every now and then while I'm working. That code is from scratch. I'm applying knowledge to solve a task. And I choose coding, specifically, because ChatGPT is remarkably good at developing code.
Until it isn't. It makes mistakes, because it's just regurgitating code that seems to fit. It can get me 80% of the way there, and it's a wonderful tool for that, but that other 20% has to be corrected because it doesn't understand what the code does, it's just "copying and pasting" (and that's an oversimplification, but only slightly so).
The difference between my coding and ChatGPT's coding is that when I read code, I know what I'm trying to do. I can apply my knowledge to say "this will work" or "this won't work" or "what the fuck is this?" before I even try to compile.
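To make that concrete, here's a made-up Python example (not actual ChatGPT output, just an illustration) of the kind of "fits the pattern but misses the intent" mistake I mean:

    # Made-up example: "return the last n lines of a file".
    def tail(path, n=10):
        with open(path) as f:
            lines = f.readlines()
        # A plausible pattern-matched answer is just:
        #     return lines[-n:]
        # which runs fine, but when n == 0 it returns the WHOLE file,
        # because lines[-0:] is the same as lines[0:].
        # Catching that requires knowing what the slice is supposed to do:
        return lines[-n:] if n > 0 else []

The buggy version compiles and passes a quick glance; you only catch it if you understand what the code is actually for.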
i0i0i t1_jdnfsy0 wrote
I think we do need a rigorous definition. Otherwise we’re stuck in a loop where the meaning of intelligence is forever updated to mean whatever it is that humans can do that software can’t. The God of the gaps applied to intelligence.
What test can we perform on it that would convince everyone that this thing is truly intelligent? Throw a coding challenge at most people and they'll fail, so that can't be the metric. We could ask it if it's afraid of dying. Well, that's already been done: the larger the model, the more likely it is to report a preference not to be shut down (without the guardrails added after the fact).
All that to say, I disagree with the idea that it's "just" doing anything. We don't know precisely what it's doing (from the neural network perspective) and we don't know precisely what the human brain is doing, so we shouldn't be quick to dismiss the possibility that what often seems to be evidence of true intelligence actually is a form of true intelligence.
ErikTheAngry t1_jdnzzie wrote
I mean... if you want a rigorous definition of intelligence to compare it to, then I guess you'll have to start there, and then when it's broadly accepted as a thing, we can compare it to that.
For now, with the definitions we do have, it's not intelligent. It's just a retrieval system, with no more intelligence than my filing cabinet.
Pligget t1_jdix3rl wrote
As I pointed out to someone who recently responded to you, this excellent article argues that such models are already power-seeking.