Submitted by fortunum t3_zty0go in singularity
sticky_symbols t1_j1gi6fn wrote
Reply to comment by fortunum in Hype bubble by fortunum
Here we go. This comment has enough substance to discuss. Most of the talk in this sub isn't deep or well informed enough to really count as discussion.
Perceptual and motor networks are making progress almost as rapidly as language models. If you think those are important, and I agree that they can only help, they are probably being integrated right now, and certainly will be soon.
I've spent a career studying how the human brain works. I'm convinced it's not infinitely more complex than current networks, and the computational motifs needed to get from where we are to brain-like function are already understood by handfuls of people; they merely need to be integrated and iterated upon.
My median prediction is ten years to full superhuman AGI, give or take. By that I mean something that makes better plans in any domain than a single human can. That will slowly or quickly accelerate progress as it's applied to building better AGI, and then we have the intelligence-explosion version of the singularity.
At which point we all die, if we haven't somehow solved the alignment problem by then. If we have, we all go on permanent vacation and dream up awesome things to do with our time.
PoliteThaiBeep t1_j1gowpp wrote
You know, I've read a 1967 sci-fi book by a Ukrainian author where they invented a machine that can copy, create, and alter human beings, with a LOT of discussion of what it could mean for humanity, as well as the threat of a super-AI.
In a few chapters where characters were talking and discussing events, one of them went on and on about how computers would rapidly overcome human intelligence and what would happen then.
I found it... interesting.
A lot of the talks I've had with tech people over the years, since around 2015, were remarkably similar, and the similarity to the talks people had in the 1960s is striking.
Same points: "it's not a question of IF, it's a question of WHEN," etc. Same arguments, same exponential talk, and so on.
And I'm with you on that, but a lot of us also pretend or think we understand more than we possibly do or could.
We don't really know when an intelligence explosion will happen.
In the 1960s, people thought it would happen when computers could do arithmetic a million times faster than humans.
We seem to hang on to FLOPS as a measure of raw compute power, compare it to the human brain, and voila: if it's higher, we've got super-AI.
We long since passed 10^16 FLOPS in our supercomputers, and yet we're still nowhere near human-level AI.
Memory bandwidth kind of slipped out of Kurzweil's books.
Maybe ASI will happen tomorrow. Or 10 years from now. Or 20 years from now. Or maybe it'll never happen and we'll just sort of merge with it as we go, without any sort of defining rigid event.
My point is: we don't really know. The FLOPS progression was a good guess, but it failed spectacularly. We have computers capable of over 10^18 FLOPS, and we're still 2-3 orders of magnitude behind the human brain when trying to emulate it.
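A back-of-the-envelope sketch of that gap in Python; the brain-emulation figure here is an assumption, since published estimates vary by orders of magnitude:

    import math

    # Assumed figures, not measurements: exascale machines exist today,
    # and full brain emulation is often guessed at ~1e21 FLOPS or more.
    supercomputer_flops = 1e18
    brain_emulation_flops = 1e21

    gap = brain_emulation_flops / supercomputer_flops
    print(f"~{math.log10(gap):.0f} orders of magnitude short")  # -> ~3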
sticky_symbols t1_j1i5gpt wrote
I agree that we don't know when. The point people often miss is that we have high uncertainty in both directions: it could happen sooner than the average guess, as well as later. We're now around the same processing power as a human brain (depending on which aspects of brain function you measure), so it's all about algorithms.
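One way to see why we land on different numbers: the comparison flips depending on which brain estimate you plug in. A minimal sketch, with both brain figures assumed for illustration (functional-equivalence estimates sit around 1e16 FLOPS, while biophysical-emulation estimates run far higher):

    # Assumed estimates, not measurements.
    functional_brain_flops = 1e16  # "what the brain computes" style estimate
    emulation_brain_flops = 1e21   # "simulate the biophysics" style estimate
    supercomputer_flops = 1e18     # roughly today's exascale machines

    print(supercomputer_flops >= functional_brain_flops)  # True: at parity
    print(supercomputer_flops >= emulation_brain_flops)   # False: far behind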