os12 t1_je2hlo6 wrote
Why would we want to involve government in regulating the means of making these images? The artists are free to draw and publish what they like... so, how is this different?
NoiceMango t1_je3whwy wrote
It's different when it's meant to impersonate someone.
os12 t1_je5dzmn wrote
I fail to see the point. Anyone can write prose and try to impersonate a writer. Or paint and try to impersonate a painter. Or program and try to impersonate a software firm.
None of that is regulated.
NoiceMango t1_je66d80 wrote
Impersonating someone with near-perfect accuracy is much different. Try seeing harder.
RayTheGrey t1_je4aoua wrote
A single person could conceivably outproduce thousands of artists drawing or photoshopping images. And to verify whether something is true or not, you need people.
I'm not sure if anything can be done about it, but the sheer volume of content enabled by generative models is a little concerning.
EnsignElessar t1_je56fv1 wrote
Artists are much more expensive to hire.
os12 t1_je5dhwn wrote
Sure, and why does this kind of democratization call for government regulation?
EnsignElessar t1_je5e5vc wrote
Ok, so I'm just a regular guy, but I have two ideas for how this all ends very badly for most people. One is automated scamming. Before, you needed a call center in India or somewhere, which could be pretty expensive, and if you wanted to scale you had to hire people, which took time. Now you can just do it all on your own. The second issue is what you can do with just a prompt: "Create a video of Biden announcing why he has just launched a tactical nuke on Russia." Oh boy. Even if we all just don't believe it, it would cause other issues, like not being able to believe anything you see or read... I mean, you don't think these are issues?
os12 t1_je5f17b wrote
Yes, this kind of crap is the inevitable byproduct of new tech. That happens every time a new thing is created.
The good news is that companies will create similar tech to filter out the shit. Just think about scam email - this is the same thing at another level.
EnsignElessar t1_je5gm08 wrote
Oh, these are just the surface-level issues. The more you dig, the more you realize that things are looking bad. We don't have a real plan for AI safety, and it will likely end us as a result. This can't be one of those things where we get it wrong and improve later, because if we fail even once, there will be no people left to learn from the mistake.