Submitted by __ingeniare__ t3_zj8apa in singularity
This is something I have thought about recently in light of the latest advances in AI. We often talk about achieving Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) as two distinct goalposts, separated by years or decades. I don't think this is correct, as I will explain below.
Clearly, we have already achieved narrow super intelligence in some domains - Chess, Go, many classification tasks; hell, I'd argue ChatGPT has a better sense of humour than most humans. AI art is, despite what its opponents might say, better than what most people could make on their own. ChatGPT knows more facts about the world than any human alive, even if it sometimes gets those facts wrong. DeepMind's recently published AlphaCode performed at roughly the level of the median human competitor in the Codeforces programming contests it was evaluated on. For any given task, the number of humans who can still outperform the best AI is rapidly shrinking.
I view the emergence of AGI over the coming years or decades as a convergence of these narrow domains. As the domains fuse, the result won't be an AI that is merely human-level - it will instantly be superhuman. By the time we have a single AI that can do everything a human can do, it will also do those things much, much better than humans, including reasoning, problem solving, and other general intelligence tasks. In other words, AGI and ASI go hand in hand: in developing the former, we are simultaneously developing the latter.
jlpt1591 t1_izu3g1i wrote
Hard agree