Submitted by LanchestersLaw t3_1253kns in MachineLearning
WindForce02 t1_je4zh7m wrote
Reply to comment by lostmsu in [D] Prediction time! Lets update those Bayesian priors! How long until human-level AGI? by LanchestersLaw
I don't know if IQ is exactly a good metric here, because LLMs merely replicate their training data, and that data (which is very big) likely contains material on IQ tests. It would be an indirect comparison: you'd be comparing sheer training-data volume with a person's ability to produce thoughts. It would be way more interesting to give GPT-4 complex situations that require advanced problem-solving skills. Say you got a message you need to decode, it has multiple layers of encryption, and you only have a few hints on how to go about it; since there's no way to replicate a response from previous training data, I'd be curious to see how far it gets. Or take a hacking CTF, which requires not only pure coding skill but also a creative thought process.
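For illustration, here is a minimal sketch of the kind of layered challenge described above, using simple reversible encodings (ROT13, hex, Base64) as a stand-in for the encryption layers the comment mentions; the `build_challenge` helper and the specific layer choices are hypothetical, not anything from the thread:

```python
import base64
import codecs

def build_challenge(secret: str) -> str:
    """Wrap a secret in several encoding layers: ROT13, then hex, then Base64."""
    layer1 = codecs.encode(secret, "rot13")              # substitution layer
    layer2 = layer1.encode().hex()                       # hex layer
    layer3 = base64.b64encode(layer2.encode()).decode()  # Base64 layer
    return layer3

challenge = build_challenge("the quick brown fox")
# Hand the model only the ciphertext plus vague hints, e.g.:
# "This was encoded in three common reversible steps. Recover the plaintext."
print(challenge)
```

The point of the construction is that the exact ciphertext almost certainly never appeared in training data, so the model has to reason through the layers rather than pattern-match a memorized answer.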
lostmsu t1_je78jfg wrote
You are missing the idea entirely. I am sticking to the idea of the original Turing test to determine if AI is human-level already or not yet.
The original Turing test is dead simple and can be applied to ChatGPT easily.
The only other thing in my comment is that "human-level" is vague, since intelligence differs from human to human, which allows for goalpost moving like in your comment. IQ is the best measure of intelligence we have, so it is reasonable to turn the Turing test into a family of tests Turing(I): each one is a regular Turing test, but the IQ of the humans participating (both the machine's opponent and the person who has to guess which one is the machine) is <= I.
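A rough sketch of that parametrized protocol might look like the following; the participant objects with `.iq` and `.interrogate` attributes, and the pass criterion of chance-level accuracy, are hypothetical placeholders rather than anything specified in the thread:

```python
import random

def turing_test_at_iq(machine, human_pool, judge_pool, max_iq, rounds=20):
    """Run Turing(I): restrict both the human contestant and the judge to IQ <= max_iq.
    Return the judge's accuracy at spotting the machine; ~0.5 means the machine passes."""
    humans = [h for h in human_pool if h.iq <= max_iq]
    judges = [j for j in judge_pool if j.iq <= max_iq]
    correct = 0
    for _ in range(rounds):
        judge = random.choice(judges)
        human = random.choice(humans)
        # Randomize which slot hides the machine so the judge can't exploit a fixed position.
        if random.random() < 0.5:
            slots, machine_slot = {"A": machine, "B": human}, "A"
        else:
            slots, machine_slot = {"A": human, "B": machine}, "B"
        guess = judge.interrogate(slots)  # judge chats with both, names the machine's slot
        correct += (guess == machine_slot)
    return correct / rounds
```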
My claim is that ChatGPT, or ChatGPT plus some trivial memory enhancement (like feeding previous failures back into the prompt), quite possibly can already pass Turing(70).
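A minimal sketch of that "trivial memory enhancement", assuming a generic `query_llm(prompt) -> str` call and a `check_answer` predicate in place of any real API:

```python
def solve_with_feedback(task, query_llm, check_answer, max_attempts=5):
    """Retry a task, feeding each failed attempt back into the next prompt as context."""
    failures = []
    for _ in range(max_attempts):
        prompt = task
        if failures:
            prompt += "\n\nPrevious attempts that were wrong:\n" + "\n".join(failures)
        answer = query_llm(prompt)
        if check_answer(answer):
            return answer
        failures.append(answer)
    return None  # no attempt passed the check
```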