
blueSGL t1_j1sx169 wrote

Pre-AGI

Mass poverty is destabilizing, and destabilization is bad for business. Automation/AI will arrive at different rates across sectors; it won't be uniform or instantaneous.

Big chunks of the economy will be either massively assisted or replaced by AI (likely one, then the other). The people displaced need to be supported, or they will be unable to buy the products and services being automated in the rest of the economy.

This will cause enough problems that UBI will have to happen. Governments/billionaires can't just sit back and watch the fireworks while Automation/AI provides them everything, because that point won't have been reached yet. They will still need the sectors that are not automated to continue working.

Post-AGI

The OP assumes that whoever is first to crack AGI also cracks alignment; we get exactly one chance at that.
I highly recommend Nick Bostrom's Superintelligence for an in-depth look at all the ways 'obvious' solutions can go wrong, along with some solutions for getting it right. Funnily enough, the ones for getting it right generally involve asking the AI to do (and I'm massively paraphrasing) "the best thing for humanity" and leaving that exact goal, with all its nuances and balancing acts, to be worked out by the AI itself.

In such a scenario (where one of the safest ways to handle alignment is to hand the problem off to the AI itself), the solution would not favor billionaires. The more tightly you drill down and define the goal function, the higher the likelihood you fuck everything up during the one chance humanity has to get things right.

Either the light cone is gonna be paperclips, or we might end up with a half-decent post-scarcity society.
