Submitted by OneRedditAccount2000 t3_xx0ieo in singularity
Stippes t1_ira5ehc wrote
I think it doesn't have to end in open conflict. There might be a Nash equilibrium outside of this. Maybe something akin to MAD or so. If an AI is about to go rogue in order to protect itself, it has to consider the possibility that it will be destroyed in the process. Therefore, preventing conflict might maximize its survival chances. Also, what if a solar storm hits earth in a vulnerable period? It might be safer to rely on organic life forms to cooperate. As an AI doesn't have agency in the sense that humans have it might see benefits in a resilient system that combines organic and synthetic intelligence.
I think an implicit assumption of yours is that humans and AI will have to be in competition. That may hold for the immediate future, but the long-term development will likely be more one of assimilation.