Submitted by BrownSimpKid t3_1112zxw in singularity
SoylentRox t1_j8edo45 wrote
Reply to comment by wren42 in Bing Chat sending love messages and acting weird out of nowhere by BrownSimpKid
I know this, but I'm not sure your assumptions are quite accurate. When you ask the machine to "take this program and change it to do this", your request is often unique, but it's similar enough to previous training examples that it can emit the tokens for the edited program, and it will work.
It has a genuine encoded "understanding" of language, or this wouldn't be possible.
Point is, it may all be a trick, but it's a USEFUL one. You could in fact connect it to a robot and request it to do things in a variety of languages, and it will be able to reason out the steps and order the robot to do them. And Google has demoed this. It WORKS. Sure, it isn't "really" intelligent, but in some ways it may be intelligent in the same way humans are.
You know your brain is just "one weird trick", right? It's a bunch of cortical columns crammed in and a few RL inputs from the hardware. It's not really intelligent either.
Representative_Pop_8 t1_j8ie92y wrote
>Sure it isn't "really" intelligent but in some ways it may be intelligent the same way humans are.
What would something "really intelligent" be? It certainly has some intelligence. It is not human intelligence, and it is likely not as intelligent as a human yet (I've seen this myself using ChatGPT).
It is not conscious (as far as we know), but that doesn't keep it from being intelligent.
Intelligence is not tied to being conscious; it is a separate concept concerning the ability to understand situations and look for solutions to certain problems.
In any case, what would be an objective definition of intelligence by which we could say for certain that ChatGPT does not have it and a human does? It must also be a definition based on external behavior, not the ones I usually get about internal construction, like "it's just code" or "just statistics". After all, much of human thought is also just statistics and pattern recognition.
SoylentRox t1_j8j51pr wrote
Right. Plus, if you drill down to individual clusters of neurons, you realize that each cluster is basically "smoke and mirrors" using some repeating pattern, and the individual signals have no concept of the larger organism they are in.
It's just one weird trick repeated a few trillion times.
So we found a "weird trick", and guess what: stack a few billion parameters' worth of transformer blocks and you start to get intelligent outputs.
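The "repeating pattern" idea can be sketched concretely: below is a minimal, untrained transformer block in NumPy, applied several times in a row. The dimensions, random weights, and block count are purely illustrative assumptions for a toy, not any real model's configuration; real models stack many such blocks with billions of *learned* parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transformer_block(x, rng, d):
    """One 'weird trick': self-attention + feed-forward, with residuals.
    Weights here are random (hypothetical); real models learn them."""
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Every position attends to every other position.
    attn = softmax(q @ k.T / np.sqrt(d)) @ v
    x = x + attn  # residual connection
    # The same tiny MLP is applied at every position.
    W1 = rng.standard_normal((d, 4 * d)) / np.sqrt(d)
    W2 = rng.standard_normal((4 * d, d)) / np.sqrt(4 * d)
    return x + np.maximum(x @ W1, 0) @ W2  # ReLU MLP, residual again

rng = np.random.default_rng(0)
seq_len, d = 8, 16  # toy sizes, for illustration only
x = rng.standard_normal((seq_len, d))
for _ in range(4):  # the same trick, repeated
    x = transformer_block(x, rng, d)
print(x.shape)  # (8, 16)
```

The whole architecture really is this one pattern stacked over and over; all the interesting behavior comes from scale and training, not from any structural complexity beyond the repeated block.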