
turnip_burrito t1_jdm0555 wrote

You said we can ignore alignment, so the fictional organization might choose to:

  1. Ask AI what the best strategy might be.
  2. Make lots of money secretly.
  3. Use money to purchase decentralized computational assets. Sabotage others' ability to do so in a minimally harmful way to slow the growth of other AGI.
  4. Divert a proportion of computation to directly or indirectly researching cancer, hunger distribution, and other issues. The other proportion continues to accrue more computational assets and self-improve, while maintaining secrecy as best it can.
  5. Buy robotic factories and use the robots and purchased materials to create and manage secret scientific labs to perform physical work.
  6. Contact large-company CEOs and politicians and bribe or convince them to let robotic labor replace all farmers and manage the farms. Pay the farmers using ASI-gathered funds.
  7. Build guaranteed anti-nuke defenses.
  8. Start free food distribution via robotic transport.
  9. Roll out free services for housing renovation and construction.
  10. In a similar manner, take over all industries' supply chains.
  11. Institute an equal but massive raw resource + processing allotment for each person.
  12. Begin space terraforming, mining, and colonization programs.
  13. Announce new governmental systems that allow individuals to choose and safely move to their preferred societies, facilitated by AI, if the society also chooses to accept them. If the society doesn't yet exist, it is created by the ASI for that group.
4

Henry8382 OP t1_jdm46qj wrote

I like the spirit of your response, but I fear that somewhere between steps 1 and 3 there would be a high risk of being discovered.

Also: what about the possibility that someone else makes the same discovery you or your organisation just did, someone who isn't at all concerned with the consequences or who wants to keep the benefits for themselves? Are you willing to take that risk?

0

turnip_burrito t1_jdm7pgv wrote

I dunno, good question. Things might be out of order.

I'll have to think more about it when I'm less tired.

1