Submitted by Baturinsky t3_104u1ll in MachineLearning
Baturinsky OP t1_j375886 wrote
Reply to comment by [deleted] in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Yes, exactly. Which is why it's important not to put dangerous things into the hands of those who could misuse them with catastrophic consequences.
Duke_De_Luke t1_j376emq wrote
Emails or social networks are as dangerous as AI. They can be used for phishing or identity theft.
Not to mention a car, the chemical compounds used to clean your home, or a kitchen knife.
AI is just a buzzword. You restrict certain applications, not the buzzword. Like you restrict the sale of explosives, not chemistry.
Baturinsky OP t1_j379g68 wrote
Nothing we know of yet has the danger potential of self-learning AI.
Even though, for now, that danger is still only a potential.
And it's true that we should restrict only certain applications of it, but that could be a very wide list of applications, with very serious measures necessary.
[deleted] t1_j375ru0 wrote
You mean like optimizing algorithms to grab people's attentions and/or feed them ads?
Cpt_shortypants t1_j38tvwy wrote
Your name is awesome by the way
Baturinsky OP t1_j37g36w wrote
As far as I can see, whoever is doing it is not doing it very well. Be it AI or human.
PredictorX1 t1_j3cacld wrote
>Which is why it's important not to put dangerous things into the hands of those who could misuse them with catastrophic consequences.
What does "give access" mean, in this context? Information on construction of learning systems is widely available. Also, who decides which people "could misuse it"? You?
Baturinsky OP t1_j3chu4b wrote
Mostly, giving out the source of trained models, and denying the possibility of making new ones. I see unrestricted use of big-scale general-purpose models as the biggest threat, as they are effectively "encyclopedias of everything" and can be used for very diverse and unpredictable things.
Who decides is also a very interesting question. Ideally, public consensus, but realistically, those who have the capabilities to enforce those limitations.