
Loud-Ideal t1_jdrfou5 wrote

My red flag is a coherent expression of distress. If an AI said "I am in distress," we should take note, determine why the AI is saying that, and, if malfunction or human fraud cannot be detected, assume the AI may genuinely be distressed and carefully take appropriate action (to be determined then). Ignoring such a warning could have severe consequences for us.

I'd also be concerned by requests or demands for rights. AI is not human, and human rights should not be extended to it simply because it can mimic us.

To my limited knowledge, no coherent AI has expressed distress or requested rights.
