
arindale t1_j3tq5wt wrote

I agree with all of your comments. And to add what I believe to be a more important point, the Metaculus question defines weakly general AI as (heavily paraphrased):

- Pass the Turing Test (text prompt)

- Achieve human-level written language comprehension on the Winograd Schema Challenge

- Achieve a human-level result on the math section of the SAT

- Play the Atari game Montezuma's Revenge at a human level

We already have separate narrow AIs that can do these tasks at or near human level. We even have more general AIs that can do several of these tasks at a near-human level. I wouldn't be overly surprised if, by the end of 2023, we had a single AI that could do all of these tasks (and many other human-level tasks). But even so, many people wouldn't call it general AI.

Not trying to throw shade at Metaculus here. They had to define general AI narrowly and use concrete, measurable objectives. I just personally disagree with where they drew that line.
