NikoKun t1_iw04ud9 wrote
Reply to comment by lughnasadh in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
> It's worth noting The Turing Test is considered obsolete. It only requires an AI to appear to be intelligent enough to fool a human. In some instances, GPT-3 already does that with some of the more credulous sections of the population.
That depends more on the human, the specifications of said Turing Test, and how thoroughly it's performed. What would be the point of conducting a Turing Test using a "credulous" interviewer? lol
If we're talking about an extended-length test, conducted by multiple experts who understand the concepts and are determined to figure out which participant is the AI, then I don't think GPT-3 could pass, at least not for more than a few minutes, at best.. heh
Reddituser45005 t1_iw0o0t5 wrote
The Turing Test was developed in the 1950s. I suspect Alan Turing would be amazed by the progress of modern computers. He certainly never imagined a machine having access to a worldwide library of the collected works of humanity. His test idea was a conversation between an evaluator and two other participants, one a machine and one a human. The evaluator's job is to determine which is the human and which is the machine. By modern standards, that can be done. We've all heard of the Google engineer who believed his AI was conscious. The challenge now is to determine what constitutes "understanding". AIs can create art, engage in conversation, solve problems, manage massive amounts of information, and are increasingly challenging our ideas of what constitutes intelligence.
Fun-Requirement9728 t1_iw31urt wrote
Is it an actual "test" or a theoretical concept? I was under the impression it was just the idea of a test for evaluating AI, not a specific set of questions.
Eli-Thail t1_iw204eg wrote
>His test idea was a conversation between an evaluator and two other participants- one a machine and one a human. The evaluators job is to determine the human from the machine. By modern standards, that can be done.
An easy way to tell the difference is to ask the exact same question twice. Particularly one that requires a lengthy answer.

The AI will attempt to answer again, but no matter how convincing or consistent its answers might be, the human will be the one who tells you to fuck off because they're not telling you their life story again.
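That probe is simple enough to sketch. Here's a minimal, purely illustrative version in Python: `respond` stands in for either participant (there's no real model API here), and the length-ratio threshold is an arbitrary assumption, not anything from Turing's paper:

```python
def repeated_question_probe(respond, question):
    """Ask the same question twice and compare the two replies.

    Heuristic from the comment above: a language model will dutifully
    answer in full both times, while a human's second reply tends to be
    much shorter (or an outright refusal). The 0.5 ratio is arbitrary.
    """
    first = respond(question)
    second = respond(question)
    return {
        "first": first,
        "second": second,
        "looks_human": len(second) < 0.5 * len(first),
    }

# Toy stand-ins for the two participants (purely illustrative):
def chatty_model(question):
    # Answers in full every time, regardless of repetition.
    return "Here is my life story, told once again in full detail..."

_human_replies = iter([
    "I grew up in a small town, studied mathematics, moved abroad for work...",
    "I already told you. No.",
])
def annoyed_human(question):
    # First reply is a real answer; the second is a refusal.
    return next(_human_replies)
```

With these stand-ins, `repeated_question_probe(chatty_model, ...)` reports `looks_human` as `False`, while the annoyed human's curt second reply trips the heuristic. Of course, a model explicitly trained or prompted to act annoyed would defeat this check, which is the thread's larger point.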
MintyMissterious t1_iwg478j wrote
Using the Turing Test for this was always nonsense, as it never had anything to do with intelligence; it's about matching a human's perception of what machines can't or won't do. And that critically includes mistakes.
Make the machine make typos, and scores go up.
There's a reason Alan Turing called it the "imitation game" and never claimed it measures intelligence.
In my eyes, it measures human credulity.