ItsAConspiracy t1_j9espr4 wrote

We don't know how to reliably give an AI a goal at all. All the innards of the AI are just a bunch of incomprehensible numbers. We don't program it; we train it until its behavior seems to be what we want. But we never know whether it might behave differently in a different environment.
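
A minimal sketch (mine, not the commenter's) of what that looks like in practice: a toy network fit to XOR with plain numpy. Nothing in the code states the goal as a rule; the training loop just nudges the parameters until the outputs happen to look right, and afterwards all the model "knows" lives in an opaque blob of learned numbers.

```python
import numpy as np

# Illustrative toy only: a tiny network trained on XOR.
# No goal or rule is ever written down -- the weights get nudged
# until the outputs look right on the training examples.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized parameters: just numbers, no "laws" anywhere.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backprop (sigmoid + cross-entropy): adjust the numbers a little
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.1 * (h.T @ d_out);  b2 -= 0.1 * d_out.sum(axis=0)
    W1 -= 0.1 * (X.T @ d_h);    b1 -= 0.1 * d_h.sum(axis=0)

print(np.round(out.ravel(), 3))  # behavior: should end up close to [0, 1, 1, 0]
print(np.round(W1, 2))           # the "understanding": an opaque pile of numbers
```

Nothing in those learned weights says "XOR", and nothing at scale would say "don't harm humans"; the behavior is just whatever the training process happened to produce on the data it saw.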

To implement something as complex as the Three Laws, we'd need an entirely different kind of AI.
