StarCaptain90 t1_jecgmdw wrote
This is a mistake. It would constrain AI to a limited potential and cost humanity much of the benefit. Instead, we should push the government to prevent Skynet scenarios from ever happening by creating an AI safety division that audits every AI company on a risk scale. The scale would factor in parameters like:

- "Can the AI get angry at humans?"
- "If it gets upset, what can it do to a human?"
- "Does it have the ability to edit its own code in a way that changes the answers to the first two questions?"
- "Can the AI intentionally harm a human?"
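Just to make the idea concrete, here's a rough sketch of what that risk scale could look like as an audit checklist. The scoring range, field names, and example scores are all made up for illustration; the actual division would define its own rubric.

```python
from dataclasses import dataclass

# Hypothetical audit item: one question on the proposed risk scale,
# scored 0 (no risk) to 3 (severe risk) by the auditing division.
@dataclass
class AuditItem:
    question: str
    score: int  # 0-3, assigned by the auditor

def total_risk(items: list[AuditItem]) -> int:
    """Sum the per-question scores into a single risk figure."""
    return sum(item.score for item in items)

# The four parameters above, expressed as audit items (scores are placeholders).
audit = [
    AuditItem("Can the AI get angry at humans?", 0),
    AuditItem("If it gets upset, what can it do to a human?", 1),
    AuditItem("Can it edit its own code to change the first two answers?", 0),
    AuditItem("Can the AI intentionally harm a human?", 0),
]

print(f"Risk score: {total_risk(audit)} / {3 * len(audit)}")
```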
Also, the Three Laws of Robotics must be embedded in the AI system if it's an AGI.