AsheyDS t1_j5mtpp0 wrote

Reply to comment by iiioiia in Steelmanning AI pessimists. by atomsinmove

Can you give an example?

1

iiioiia t1_j5mzfu0 wrote

Most of our rules and conventions are extremely arbitrary, highly suboptimal, and maintained via cultural conditioning.

1

AsheyDS t1_j5n7s65 wrote

The guard would be a compartmentalized hybridization of the overall AGI system, so it too would have a generalized understanding of what bad or undesirable things are, even according to our arbitrary framework of cultural conditioning. So could undesirable ideas leak out? Well, no, not really. Not if the guard and other safety components are working as intended, AND if the guard is programmed with enough explicit rules, conditions, and examples to extrapolate from effectively (meaning not every case needs to be accounted for if patterns can be derived).

2

iiioiia t1_j5nafg6 wrote

How do you handle risk that emerges years after something becomes well known and popular? Let's say it produces an idea that starts out safe but then mutates. Or a person merges two objectively safe (on their own) AGI-produced ideas, producing a dangerous one (that could not have been achieved without AI/AGI)?

I dunno, I have the feeling there are a lot of unknown unknowns, and likely some (yet to be discovered) incorrect "knowns" floating out there.

1

AsheyDS t1_j5njw0c wrote

>a person merges two objectively safe (on their own) AGI-produced ideas

Well, that's kind of the real problem, isn't it? A person, or people, and their misuse or misinterpretation or whatever mistake they're making. You're talking about societal problems that no one company is going to be able to solve. They can only anticipate what they can, hope the AGI anticipates the rest, and tackle future problems as they come.

1

iiioiia t1_j5o1g81 wrote

This is true even without AI, and it seems we weren't ready (see climate change) even for the technology we've developed so far.

1