iiioiia t1_j5nafg6 wrote

Reply to comment by AsheyDS in Steelmanning AI pessimists. by atomsinmove

How do you handle risk that emerges years after something becomes well known and popular? Let's say it produces an idea that starts out safe but then mutates? Or, a person merges two objectively safe (on their own) AGI-produced ideas, producing a dangerous one (that could not have been achieved without AI/AGI)?

I dunno, I have the feeling there's a lot of unknown unknowns and likely some (yet to be discovered) incorrect "knowns" floating out there.

AsheyDS t1_j5njw0c wrote

>a person merges two objectively safe (on their own) AGI-produced ideas

Well, that's kind of the real problem, isn't it? A person, or people, and their misuse or misinterpretation or whatever mistake they're making. You're talking about societal problems that no one company is going to be able to solve. They can only anticipate what they can, hope the AGI anticipates the rest, and tackle future problems as they come.

iiioiia t1_j5o1g81 wrote

This is true even without AI, and it seems we weren't ready (see: climate change) even for the technology we've developed so far.
