The only AI that the US should be trying to make Submitted by ribblle t3_127u3wc on March 31, 2023 at 6:16 PM in singularity [removed] 6 comments 0
ribblle OP t1_jefti0b wrote on March 31, 2023 at 6:22 PM #2,542,782 Technically, silicon goku. Not saving cats from trees here, world threatening things only. −4−
Iffykindofguy t1_jefu7bp wrote on March 31, 2023 at 6:27 PM #2,542,974 Thank goodness you're not in charge 7
Surur t1_jefzgf1 wrote on March 31, 2023 at 7:02 PM #2,544,662 The logical way to prevent the creation of another AGI is to kill everyone. "Anything else is an unacceptable risk, given the bugginess of AI". 1
ribblle OP t1_jego2mx wrote on March 31, 2023 at 9:48 PM #2,553,246 Replying to Surur (#2,544,662) If you want to minimize the risk of AI, you minimize the actions of AI. This isn't actually good enough, but it's the best strategy if you're forced to make one. 1
ribblle OP t1_jego522 wrote on March 31, 2023 at 9:48 PM #2,553,279 Replying to Iffykindofguy (#2,542,974) You realize most people don't have faith in the singularity being a safe goal? 1