Submitted by Baturinsky t3_104u1ll in MachineLearning
bitemenow999 t1_j3784qe wrote
Dude we are not building Skynet, we are just predicting if the image is of a cat or a dog...
Also, like it or not, AI is almost getting monopolized by big tech, given the huge models and the resources required to train them. It is almost impossible for an academic research lab to have the resources to train one of the GPTs or diffusion models or any of the SOTA models (without sponsorships). Regulating it will kill the field.
Philpax t1_j37i5s5 wrote
> we are just predicting if the image is of a cat or a dog...
And there's no way automated detection of specific traits could be weaponised, right?
I generally agree that it may be too early for regulation, but that doesn't mean you can abdicate moral responsibility altogether. One should consider the societal impacts of their work. There's a reason why Joseph Redmon quit ML.
DirkHowitzer t1_j38k6b8 wrote
A tool is just that, a tool. Any tool can be used for good or for evil purposes. It's hard to imagine that a well regulated AI is all that is needed to get the Chinese government to stop brutally oppressing the Uyghur people. Regulate AI all you want, it won't stop nasty people from doing nasty things. It will stop bitemenow999 from making his cat/dog model. It will stop a lot of very productive people from doing important and positive work with AI.
If a graduate student no longer wants to pursue ML because of his own moral code, that is his choice. There is no reason that I, or anyone else, should be regulated out of doing research in this area because of someone else's hang-ups.
bitemenow999 t1_j39qzwa wrote
>but that doesn't mean you can abdicate moral responsibility altogether.
If you design a car model, will you take responsibility for each and every accident the car is involved in, irrespective of human or machine error?
The way I see it, I am an engineer/researcher; my work is to provide the next generation of researchers with the best possible tools. What they do with those tools is up to them...
Many will disagree with my opinion here, but in any field, if past researchers had stopped to think about the potential bad-apple cases, we would not see many of the tools/devices we take for granted every day. Just because Redmon quit ML doesn't mean everyone should follow in his footsteps. Restricting research in ML (if something like that is even possible) would be akin to proverbial book burning...
THENOICESTGUY t1_j3bse8i wrote
I agree with you. The goal of scientists/engineers and the like is to produce tools/discoveries, whether or not they can be used for someone's benefit or harm. What someone does with what they found or created isn't their concern; it's the person who's using it that is of concern.
Baturinsky OP t1_j3bx0gj wrote
I understand the sentiment, but I think it's irresponsible. The possible bad consequences of AI misuse are far worse than those of any other research before. That's not a reason to stop it, but a reason to treat it with extreme care.
Blasket_Basket t1_j3h9l5p wrote
Got anything solid to back that claim up that isn't just vague handwavy concerns about a "superintelligence" or AGI? You're acting as if what you're saying is fact when it's clearly just an opinion.
Baturinsky OP t1_j3hmxy6 wrote
ChatGPT may not be on the level of AGI yet (even though some think it is -
https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that)
But the progress of AI training shows no signs of slowing down, and there is a very big possibility that we will reach it soon.
Also, even without being AGI, AI can be extremely dangerous.
Baturinsky OP t1_j379whv wrote
Yes, it's kinda self-limited by the costs of training right now. But I think it's inevitable that there will be more efficient training algorithms soon, possibly by orders of magnitude. They will probably be found with the help of ML, as AI can now be trained for programming and research too.