Submitted by purepersistence t3_10r5qu4 in singularity
purepersistence OP t1_j6w2xl7 wrote
Reply to comment by CertainMiddle2382 in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
Starting with language is a great way to SIMULATE intelligence or understanding by grabbing stuff from a bag of similar text that's been uttered by humans in the past.
The result will easily make people think we're ahead of where we really are.
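A deliberately crude illustration of what I mean by "grabbing stuff from a bag of text": a toy bigram babbler. This is not how GPT actually works under the hood (everything here is made up for illustration), but it shows how output can look fluent without any understanding behind it.

```python
# Toy sketch (an illustration, not GPT's actual mechanism): generate text
# by stitching together fragments of previously seen human text.
import random
from collections import defaultdict

def build_bigrams(corpus: str) -> dict:
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def babble(follows: dict, start: str, length: int = 10) -> str:
    """Walk the table, picking a statistically plausible next word each step."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the duck swims like a duck and quacks like a duck in the pond"
print(babble(build_bigrams(corpus), "the"))
# The output *looks* like language, but nothing here models ducks or ponds.
```

The babbler never understands anything; it just replays word-adjacency statistics from past text, and the result still reads as grammatical.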
CertainMiddle2382 t1_j6wwyvp wrote
“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”
In all honesty, I don't really know if I'm actually thinking/aware, or just a biological neural network interpreting itself :-)
purepersistence OP t1_j6x005a wrote
>“If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”
The problem is that people believe that. With ChatGPT it just ain't so. I've given it lots of coding problems, and it frequently generates bugs. I point out the bugs and sometimes it corrects them; the reason they were there to begin with is that it didn't have enough clues to grab the right text. Just as often, or more, it agrees with me about the bug, but its next change fucks up the code even more. It has no idea what it's doing. Yet it's still able to give you a very satisfying answer to lots and lots of queries.
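A made-up but representative exchange (the function names and the bug are mine for illustration, not an actual transcript):

```python
# Hypothetical example of the pattern described above.
# Round 1: the model's answer looks right but has an off-by-one.
def last_n_lines(path, n):
    with open(path) as f:
        lines = f.readlines()
    return lines[-n - 1:]    # bug: returns n + 1 lines, not n

# Round 2: I point out the bug; the "fix" now drops the final line entirely.
def last_n_lines_fixed(path, n):
    with open(path) as f:
        lines = f.readlines()
    return lines[-n - 1:-1]  # agrees there was a bug, then makes it worse

# Correct version, for reference:
def last_n_lines_correct(path, n):
    with open(path) as f:
        return f.readlines()[-n:]
```

Both wrong versions run fine and look plausible, which is exactly the problem.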