3SquirrelsinaCoat t1_jae3n3f wrote
Reply to comment by WackyTabbacy42069 in Scientists unveil plan to create biocomputers powered by human brain cells - Now, scientists unveil a revolutionary path to drive computing forward: organoid intelligence, where lab-grown brain organoids act as biological hardware by Gari_305
Arguably, true AGI is a new life form, whether it runs on silicon or meat. I don't believe the current versions of machine learning will lead to AGI, for a few reasons, one of which is energy. If we get better energy efficiency (and maybe it scales, idk), then we can go full steam toward AGI because a huge hurdle is removed. But if we could somehow remove that hurdle and build AGI with our existing tools, I would still class it as closer to life than to machine. Autonomy of thought and a real desire to exist (not a pretend one like what is farted out by the Puppet Known as ChatGPT) is evidence of life - but that's me.
rigidcumsock t1_jaeqawy wrote
I feel like you haven’t used ChatGPT or read up on it much if you think it purports in any way to be autonomously intelligent…
There’s zero “desire to exist”. It will tell you straight up it doesn’t feel or think, and is only a program that writes.
But go ahead and trash on a tool for not being a different tool I guess lmao
3SquirrelsinaCoat t1_jaer6kr wrote
I know exactly what it is. And I chose my words intentionally.
rigidcumsock t1_jaerpb9 wrote
> The autonomy of the thought and a real desire to exist (not a pretend one like what is farted out by the Puppet Known as ChatGPT)
Then why are you claiming that ChatGPT pretends to have “autonomy of thought” or a “real desire to exist”? It’s just categorically incorrect.
3SquirrelsinaCoat t1_jaetk90 wrote
There have been plenty of demonstrations of that tool being steered into phrasing that is uniquely human. A New York Times reporter, or someone like that, duped it into talking relentlessly about how it loved him. Other examples are plentiful: outputs that ascribe a sense of self, which users take at face value because, for the most part, they don't understand what they're using.
There is a shared sentiment I've seen in the public dialogue - voiced perhaps most famously by the Google engineer who was fired for saying he believed a generative chat tool was conscious (that was Google's LaMDA, not ChatGPT) - a narrative that something like ChatGPT is on the verge of AGI, or at least on a direct path toward it. And while data scientists or architects or whatever may look at it and think, yeah, I can kind of see that if it becomes persistent and tailored, that's a kind of AGI, the rest of the world thinks Terminator, HAL, whatever the fuck fiction. And because ChatGPT has this tendency toward humanizing its outputs (which isn't its fault; that's the data it was trained on), there is an implied intellect and existence that the non-technical public perceives as real, and it's not real. It's a byproduct, a fart if you will, that results from other functions that are on their own valuable.
rigidcumsock t1_jaeu0ye wrote
You’re waaaaay off base. Of course I can tell it to say anything— that’s what it does.
But if you ask it what it likes or how it feels etc it straight up tells you it doesn’t work like that.
It’s simply a language model tool and it will spell that out for you. I’m laughing so hard that you think it pretends to have any “sense of self” lmao
3SquirrelsinaCoat t1_jaeurbg wrote
>Of course I can tell it to say anything— that’s what it does.
No, that's not what it does. I'm leaving this. I thought you had an understanding of things.
rigidcumsock t1_jaeuwep wrote
I’m not the one claiming a language model AI pretends to have a sense of self or desire to exist, but sure. See yourself out of the convo lol