Omycron83 t1_j378v8s wrote

We don't need to regulate AI research in any way (by itself, it can't really do harm), only the applications (and those often already are regulated). You can basically ask the question: "Would you let any person, even if grossly unqualified or severely mentally unstable, do this?" Any normal application (browsing the web, analyzing images of plants, finding new patterns in data, talking to people, etc.) where the answer is "yes" doesn't need any restriction whatsoever (at least not in the way you are asking).

When it comes to driving a car, diagnosing patients, handling military equipment, etc., you wouldn't want just ANY person to do that, which is why there are restrictions regulating who may (you need a driver's license, a medical degree and license, to be deemed mentally fit, and so on). In these areas it is reasonable to limit the group of decision makers, and for example to exclude AI. And since algorithms don't hold any of those qualifications, they are by default not allowed to do that stuff anyway; they can only be permitted once someone on the government side deems them reliable enough. Of course there are edge cases where AI may do stupid things in normal applications, but those are rare and usually small-scale (for example, a delivery drone breaking someone's window or something).

TLDR: most cases where you would want restrictions already have them in place, because people aren't perfect either.
