Submitted by Dramatic-Economy3399 t3_106oj5l in singularity
turnip_burrito t1_j3itwno wrote
Reply to comment by AndromedaAnimated in Organic AI by Dramatic-Economy3399
No, my point is that because people act like this now, they would be even more empowered with a personal AGI that takes any instruction from them. Their behavior would become more extreme. It would be absurd.
AndromedaAnimated t1_j3iyz96 wrote
But the one big central AI would take instructions too. From those who own it.
turnip_burrito t1_j3izql7 wrote
Yes, ensuring the developers are moral is also a problem.
AndromedaAnimated t1_j3j0yik wrote
The developers will not be the owners, though…
turnip_burrito t1_j3j2muo wrote
Okay, it seems complex and depends on whether the developers or the owners have the final say. But then replace "owners" with "developers" in my statement.
AndromedaAnimated t1_j3j2wxf wrote
Then the statement is correct.
The problem I see here is that a single human, or a small group of humans, cannot perfectly know right from wrong (unless he or she is Jesus Christ, maybe - and I am not Christian, I just see that long-dead guy as a pretty good person).
turnip_burrito t1_j3j3kfe wrote
I don't think we will have what everyone would call a "perfect outcome" no matter what we choose. I also don't believe right and wrong are absolute across people. I'm interested in finding a "good enough" solution that works most of the time, on average.
AndromedaAnimated t1_j3j5sma wrote
I also agree here.