The only AI that the US should be trying to make
Submitted by ribblle on March 31, 2023 at 6:16 PM in singularity
[removed] · 6 comments · 0 points
Iffykindofguy wrote on March 31, 2023 at 6:27 PM (7 points):
Thank goodness you're not in charge.
    ribblle (OP) replied on March 31, 2023 at 9:48 PM (1 point):
    You realize most people don't have faith in the singularity being a safe goal.
Surur wrote on March 31, 2023 at 7:02 PM (1 point):
The logical way to prevent the creation of another AGI is to kill everyone. "Anything else is an unacceptable risk, given the bugginess of AI."
    ribblle (OP) replied on March 31, 2023 at 9:48 PM (1 point):
    If you want to minimize the risk of AI, you minimize the actions of AI. This isn't actually good enough, but it's the best strategy if you're forced to make one.
ribblle (OP) wrote on March 31, 2023 at 6:22 PM (−4 points):
Technically, silicon Goku. Not saving cats from trees here, world-threatening things only.