Gimbloy t1_izuayb3 wrote
So you’re inclined to think the hard takeoff scenario is more likely?
__ingeniare__ OP t1_izuhuhb wrote
Well yes, but it's a bit more nuanced. What I'm saying is that the regular "takeoff" scenario won't happen like that. We won't reach a point where we have human-level AI that then develops into an ASI. We will effectively arrive at human-level AI and ASI simultaneously. The reason is that AI development will progress as a continuous widening of narrow superintelligence, rather than some kind of intelligence progression across the board.
Gimbloy t1_izupp9n wrote
At some point in that gradual progression, AI must reach a level that is equivalent to a human though, right? Or do you think it just skips a few steps and goes straight to ASI?
gamernato t1_izv7n50 wrote
The argument he's making is that the amount of time for an AGI to develop into ASI is negligible in the scheme of things, rather than AGI arriving and then ASI developing years, decades, or centuries later.
__ingeniare__ OP t1_izw9dl4 wrote
It won't ever be equivalent to a human across the board; it will be simultaneously superhuman in some domains and subhuman in others, until eventually it is simply superhuman everywhere. It would be human level at some point in a given narrow domain, but if we look at current progress, AI tends to reach superhuman levels in these separate domains long before we reach AGI. So, when these domains are fused into a single AI that can do everything a human can, it will also be superhuman at those things.
TopicRepulsive7936 t1_izue9be wrote
Nothing to do with likelihood; this is pure logic from definitions.