Submitted by Baturinsky t3_104u1ll in MachineLearning
Baturinsky OP t1_j37bbwe wrote
Reply to comment by Omycron83 in [D] Is it a time to seriously regulate and restrict AI research? by Baturinsky
Imagine the following scenario. Alice has an advanced AI model at home and asks it, "find me the best way to do a certain bad thing and get away with it", such as harming or even murdering someone. If it's a model like ChatGPT, it will probably be trained to avoid answering such questions.
But if network models are not regulated, she can find a warez model with no morals, retrain the morality out of one, or pretend that she is a police officer who needs that data to solve a case. Then the model gives her a usable method.
Now imagine if she asks for a method to do something way more drastic.
anon_y_mousse_1067 t1_j37dth2 wrote
If you think government regulation is going to solve an issue like this, I have bad news for you about how government regulation works.
Baturinsky OP t1_j37ej92 wrote
Ok, how would you suggest solving that issue then?
EmbarrassedHelp t1_j37qjz1 wrote
Dude, have you ever been to a public library before? You can literally find books on how best to kill people and get away with it, how to cook drugs, how to make explosives, and all sorts of things. Why do you want to do the digital equivalent of burning libraries?
Baturinsky OP t1_j37rkj0 wrote
Yes, but that would require a lot of time and effort. AI has already read it all and can apply the equivalent of millennia's worth of human time to analyse it.
Omycron83 t1_j37dxva wrote
And why did ChatGPT do that? Because the data was already there on the internet, so there's nothing she couldn't figure out on her own here. In general, there is basically no way an AI can (as of right now) think of an evil plan so ingenious that no one could come up with it otherwise.
Baturinsky OP t1_j37fvgz wrote
Key word here is "right now".