
NonDescriptfAIth t1_jefz4dt wrote

I'm not concerned with AGI being unaligned with humans. Quite the opposite, really. I'm worried that our instructions to an AI will not be aligned with our desired outcomes.

It will most likely be a government that finally crosses the threshold into self-improving AI. Any corporation that gets close will be semi-nationalised, such that its controls are taken over by the government that helped fund it.

I'm worried about humans telling the AI to do something horrifying, not that the AI will do it of its own volition.

This isn't sci-fi and it certainly isn't computer programming either.

The only useful way to discuss this possible entity is simply as a superintelligent being. Predicting its behaviour is near impossible, and the implications of this are more philosophical in nature than scientific.
