
bluzuli t1_iz7qt2l wrote

When AI becomes smart enough to improve itself, it will keep improving itself on its own, which increases its ability to improve itself, ad infinitum. Like a car that keeps accelerating, it would quickly outstrip human intelligence and become better than humans at everything.
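To make the feedback loop concrete, here's a toy sketch (not a prediction; the numbers and the improvement rule are made up) of a system whose capability feeds back into how fast it improves itself:

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each cycle, capability grows by an amount proportional
# to current capability -- i.e. compounding growth.
capability = 1.0       # arbitrary units; 1.0 = "human-level" baseline
growth_per_unit = 0.1  # made-up coupling between capability and improvement speed

for cycle in range(1, 11):
    improvement = growth_per_unit * capability  # a smarter system improves itself faster
    capability += improvement
    print(f"cycle {cycle:2d}: capability = {capability:.2f}")

# Because the size of each improvement scales with capability, growth is
# exponential rather than linear -- that's the "runaway" part of the argument.
```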

It's also why there's a saying that "AGI is the last problem we ever need to solve": once you have AGI, it can solve problems on behalf of humans, better than humans can.

Need to cure cancer? Build AGI; since AGI is smarter than you in every way, it works out the cure for you.

In such a future, which seems likely given the progress of AI, what role do humans play? How can the AI be controlled? What if the AI decides to eliminate humans? What if the AI is controlled by only a few people? What if there are multiple AIs?

In physics, a singularity usually refers to a point, such as the center of a black hole, where quantities become infinite and the known laws of physics break down. In the context of AI, the singularity refers to a similar kind of event: once AGI becomes a reality, we simply can't predict what happens beyond it.
