AndromedaAnimated t1_j3ithan wrote

Reply to comment by turnip_burrito in Organic AI by Dramatic-Economy3399

So the world would… basically stay AS IT IS? 🤣🤣🤣

0

turnip_burrito t1_j3itwno wrote

No, my point is that because people act like this now, they'd be even more empowered with personal AGI if it takes any instruction from them. It would become more extreme. It would be absurd.

1

AndromedaAnimated t1_j3iyz96 wrote

But the one big central AI would take instructions too. From those who own it.

1

turnip_burrito t1_j3izql7 wrote

Yes, ensuring the developers are moral is also a problem.

2

AndromedaAnimated t1_j3j0yik wrote

The developers will not be the owners tho…

1

turnip_burrito t1_j3j2muo wrote

Okay, it seems complex and depends on whether the developers or the owners have the final say. But then just replace "owners" with "developers" in my statement.

1

AndromedaAnimated t1_j3j2wxf wrote

Then the statement is correct.

The problem I see here is that a single human, or a small group of humans, cannot perfectly know right from wrong (unless he/she is Jesus Christ maybe - and I am not Christian, I just see that long-dead guy as a pretty good person).

1

turnip_burrito t1_j3j3kfe wrote

I don't think we will get what everyone would call a "perfect outcome" no matter what we choose. I also don't believe right and wrong are absolute across people. I'm interested in finding a "good enough" solution that works most of the time, on average.

2