Comments

Low-Restaurant3504 t1_je1yy1q wrote

Oh, the line is celebrities. People with money and influence. Gotcha. Not you or me. Our "betters". Just gonna go commission some deepfake porn of this guy's mother and see how he feels then.

417

KillBoxOne t1_je2798g wrote

Regulation? How about you just don’t do it? It’s like he is saying “I did it because the government didn’t stop me”!

Edit: I get the larger need for regulation. It's just funny how the guy who did it gets caught, then pivots to saying more regulation is needed.

10

os12 t1_je2hlo6 wrote

Why would we want to involve the government in regulating the means of making these images? Artists are free to draw and publish what they like... so how is this different?

6

GonnaGoFar t1_je2jtr0 wrote

Honestly, at this point, it seems like deepfake porn of celebrities and regular women is inevitable.

How can we stop that? Seriously.

2

TheFriendlyArtificer t1_je2qiul wrote

How?

The neural network architectures are out in the wild. The weights are trivial to find. Generating your own just requires a ton of training data and some people to annotate it, and that's only if you're not using an unsupervised model.

I have a stripped down version of Stable Diffusion running on my home lab. It takes about 25 seconds to generate a single 512x512 image, but this is on commodity hardware with two GPUs from 2016.

If I, a conspicuously handsome DevOps nerd, can do this in a weekend and can deploy it using a single Docker command, what on earth can we do to stop scammers and pissant countries (looking at you, Russia)?

There is no regulating our way out of this. Purpose-built AI processors will bring the cost barrier down even more substantially. (Though it is pretty cool to be able to run NN inference on a processor architecture that was becoming mature when disco was still cool.)

Edit: For the curious, the repo with the pre-built Docker files (not mine) is https://github.com/NickLucche/stable-diffusion-nvidia-docker
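
For a sense of how low the barrier actually is, here's a minimal sketch of local generation using Hugging Face's diffusers library. This is not the poster's exact setup; the model ID and settings are illustrative:

```python
# Minimal local Stable Diffusion inference (illustrative, not the poster's setup).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly downloadable weights
    torch_dtype=torch.float16,         # half precision to fit older commodity GPUs
)
pipe = pipe.to("cuda")

# A single 512x512 image; on 2016-era hardware this can plausibly take ~25 seconds.
image = pipe("an astronaut riding a horse", height=512, width=512).images[0]
image.save("output.png")
```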

46

aflarge t1_je2sxx4 wrote

So are they gonna ban using Photoshop to doctor pictures of the unconsenting? They're being sensationalist idiots.

47

MetricVeil t1_je2x69a wrote

Rule 34. Plus, fakes of celebrities have been around for almost as long as the internet has. The only difference is that the quality of the fakes has improved exponentially.

To be honest, porn is a major factor in whether some technologies are taken up by the masses. :D

15

cinemachick t1_je31zzi wrote

On the one hand, we are definitely on the edge of a world where anything can be faked. On the other hand, we've been down this road before: Photoshop, "realistic" CGI, dodging and burning pinup prints, the fairy photograph hoaxes of the early 1900s, etc. We learn and adapt to changes incrementally, not everyone and not all at once, but we get there eventually. And let's be honest, misinformation has been in the media for years - the reporting around the sinking of the Lusitania was heavily spun to build justification for war, way before anyone had AI or Photoshop. It all comes down to who the source is and their credibility, and it has since the dawn of the written word.

(But tbf, I'm in an industry that will be hit hard by AI so I understand the panic!)

14

BroForceOne t1_je33f11 wrote

Won't someone please think of the poor celebrities?!

2

KRA2008 t1_je3ajdn wrote

I'm sure I'm not the first to say it, but I think I'm going to go ahead and study oil painting for the next 50 years, so that the neural network in my head can create images just like this. If that doesn't work out, I'll use Photoshop to do the exact same thing. Am I illegal?

2

ozonejl t1_je3ay6c wrote

I'm in the Adobe Firefly beta and the content filters are pretty restrictive. I deleted what I thought were a couple of innocuous words from my prompts AND it wouldn't let me use "Michael Jackson." To be fair, I was trying to make Michael Jackson at the karaoke bar with G.G. Allin, who apparently Adobe doesn't know about.

11

ozonejl t1_je3bh7o wrote

Good to see a reasonable person who doesn't just see a threat to their job and freak out. New technology always comes with the same concerns and challenges. I'm kinda like… people already fall for loads of obviously, transparently fake shit on Facebook, but somehow this is gonna be so much worse?

2

Cheshire1871 t1_je3qkh9 wrote

Why? They photoshop themselves beyond recognition. Those are fake, so how is this different? Both are digitally altered. So, no more Photoshop?

4

Biyeuy t1_je428ye wrote

Let's use it in the Ukraine war to defeat the Soviet fascists. Otherwise Putin takes the lead in AI-powered warfare.

1

TiredOldLamb t1_je42itm wrote

That gif of the pope doing a trick in front of the bishops is like 10 years old at this point, and it's beyond hilarious, much better than the puffy coat. But now that they're using AI, it's crossing the line?

5

Special_Function t1_je430im wrote

You could have photoshopped the pope with the same jacket and gotten the same response.

30

MensMagna t1_je439q9 wrote

How could anyone even think that image of the pope is real?

1

In-Cod-We-Thrust t1_je44jn8 wrote

Every day I plead. I beg. I raise my voice to the Gods of all the heavens: "Please… just flood it one more time."

2

Neiko_R t1_je44v03 wrote

I don't know what people are trying to achieve with "regulation"; these AIs are already open source and can be used by anyone.

1

MeloveTHICCbootay t1_je46ggb wrote

Regulate this, regulate that. How about you stop trying to have the government regulate everything in our fucking lives? For fuck's sake. Fuck off.

1

SwagginsYolo420 t1_je46oaa wrote

The hardware is already out there though.

Also it would be a terrible idea to have an entire new emerging technology only in the hands of the wealthy. That's just asking for trouble.

It would be like saying regular hardware shouldn't be allowed to run Photoshop or a spreadsheet or a word processor because somebody might do something bad with it.

People are going to have to learn that images and audio and video can be faked, just like they had to learn that an email from a Nigerian prince is a fake.

There's no wishing this stuff away; the cat is already out of the bag.

10

Glittering_Power6257 t1_je49f6f wrote

As Nvidia fairly recently learned, blocks on running certain algorithms will be circumvented. Many applications also use GPGPU to accelerate non-graphics workloads (GPUs are pretty much highly parallel supercomputers on a chip), so cutting off GPGPU is not on the table either. Unless you wish to just completely screw over the open source community and go the whitelist route.

4

RayTheGrey t1_je4aoua wrote

A single person could conceivably outproduce thousands of artists drawing or photoshopping images. And to verify whether something is true or not, you need people.

I'm not sure if anything can be done about it, but the sheer volume of content enabled by generative models is a little concerning.

1

seamustheseagull t1_je4dypk wrote

It was about a decade ago that I first recall conversations about this problem.

Back then we knew this was going to happen.

And there are many solutions to this problem which existed back then, including the use of digital signing for images and videos to verify when they were produced and by whom.
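
For the curious, here's a minimal sketch of what digital signing looks like, using Ed25519 via Python's cryptography package. The flow is illustrative only; real provenance schemes like C2PA embed signed metadata inside the media file itself:

```python
# Sketch of image provenance via digital signatures (illustrative only).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a keypair once and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# At publication time, sign the image bytes.
image_bytes = open("photo.jpg", "rb").read()
signature = private_key.sign(image_bytes)

# Later, anyone holding the public key can check the image is unmodified.
try:
    public_key.verify(signature, image_bytes)
    print("Verified: untampered, and signed by the claimed publisher.")
except InvalidSignature:
    print("Failed: the image was altered or signed by someone else.")
```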

We've had at least a decade to prepare for this, and nobody in the media or tech sectors has bothered to do fucking anything.

So now we get a couple of years of pure chaos as fake images get produced which are virtually indistinguishable from reality, and everyone is scrambling to put measures in place to fix this.

1

seamustheseagull t1_je4ecal wrote

It's fairly common for someone to demonstrate a capability in order to prove the need to regulate it.

Whether or not he did this deliberately, the fact that the image has gained so much attention has obviously made him realise the danger here, and now he's using his brief new platform to try to highlight that danger. I don't see the issue.

1

G33ONER t1_je4h8lq wrote

The Pope looked dope. Has he made a statement?

1

Glader t1_je4kcma wrote

Can someone please take Gilbert Gottfried's "The Aristocrats" clip, speech-to-text it, and feed it to an artist AI?

1

The_DashPanda t1_je4r5ru wrote

In this age of hyper-consumerist "late-stage" capitalism, the head of a world religion wearing expensive, elitist clothes might just be too believable for some people to distinguish artistic expression from visual record, but I'm sure they'll just blame the technology and push for censorship.

1

banterism t1_je4uvgk wrote

We draw the line at puffy jackets? This is an outrage! Lol

1

Tiamatium t1_je4wk12 wrote

Yeah, it already is, and has been for decades (Photoshop, ever heard of it?). This is literally not a new problem, and we have a solution codified into law throughout most of the world.

2

H809 t1_je4zp9t wrote

Look, famous people: AI is dangerous for your image, so we need better regulation, because if it's dangerous for you almighty individuals, it's bad for humanity. Fucking simp.

1

NamerNotLiteral t1_je533mj wrote

I only see one way to regulate models whose weights are public already.

Licenses hard-built into the GPU itself, through driver code or whatever. Nvidia and AMD can definitely do this. When you load the model into the GPU, they could check the exact weights, and if it's a 'banned' model they could shut it down.

Most of these models are too large for individuals to train from scratch, so you'd only need to ban the weights already floating around. Fine-tuning wouldn't be possible either, since you have to load the original model before you can fine-tune it.

Yes, there would be ways to circumvent this, speaking as a lifelong pirate. But it's something Nvidia could do, and it would immediately and massively raise the barrier to entry.
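
As a rough software sketch of the idea (purely hypothetical; the blocklist entry is made up, and as noted, perturbing a single weight changes the fingerprint and defeats the check):

```python
# Hypothetical "banned weights" check, sketching the proposal above.
# Trivially circumvented: changing one weight changes the hash.
import hashlib

BANNED_CHECKPOINT_HASHES = {
    "hypothetical-sha256-of-a-banned-model",
}

def checkpoint_fingerprint(path: str) -> str:
    """SHA-256 of the raw checkpoint file, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def allow_load(path: str) -> bool:
    """A driver-level check would refuse to load blocklisted fingerprints."""
    return checkpoint_fingerprint(path) not in BANNED_CHECKPOINT_HASHES
```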

2

EnsignElessar t1_je55ao5 wrote

True, but this is way easier. The ability to do this has been there for a long time, but it was a hard skill to learn. Now, just from your phone, you can type "Pope with a weird coat" and it will be created for you. Something else to consider: it's text-to-image today, sure, but tomorrow it will be text-to-video, and then you combine that with text-to-audio. So now a single person, not a studio, can easily make a fake of anyone saying anything they like.

4

EnsignElessar t1_je55k6k wrote

I'm not sure they could even ban it at this point... it's too late. But something needs to be done, otherwise our internet will be mostly bots. Same deal with phone calls (probably already the case), but the scamming is about to get a whole lot more effective and scalable.

1

bobnoski t1_je55nlf wrote

The ease, speed, and accuracy of it. It's now possible, within minutes of a live video being broadcast, to use deepfakes and AI voice generation to modify a video of a world leader. It doesn't have to be something where the entire video is faked or edited; say you edit a world leader saying "we will support Ukraine" into "we will no longer support Ukraine." Set it on blast, or in more repressive regimes run it as if it were the live feed, and disproving it will be a far harder task than debunking an article that says "this world leader said this thing."

The more realistic, multi-faceted, and abundant fakes are, the higher the chances that people no longer trust the real thing.

1

Jman1a t1_je55p69 wrote

"I am a man of the cloth, a servant of God!" *sits on his solid gold throne*

1

os12 t1_je5dqza wrote

It is concerning... just like a single person being able to compile a large program, or 3D print a complex model or tool, or spin up a scalable service in AWS.

So what? None of that is regulated.

0

os12 t1_je5dzmn wrote

I fail to see the point. Anyone can write prose and try to impersonate a writer. Or paint and try to impersonate a painter. Or program and try to impersonate a software firm.

None of that is regulated.

0

EnsignElessar t1_je5e5vc wrote

OK, so I'm just a regular guy, but I have two ideas for how this all ends very badly for most people. One is automated scamming. Before, you needed a call center in India or somewhere, which could be pretty expensive, and if you wanted to scale you had to hire people, which took time. Now you can do it all on your own. The second issue comes from just a prompt: "Create me a video of Biden announcing why he has just launched a tactical nuke on Russia." Oh boy. Even if we all just don't believe it, it would cause other issues, like not believing anything you see or read... I mean, you don't think these are issues?

1

os12 t1_je5f17b wrote

Yes, this kind of crap is the inevitable byproduct of new tech. That happens every time a new thing is created.

The good news is that companies will create similar tech to filter out the shit. Just think about scam email - this is the same thing at another level.

1

EnsignElessar t1_je5gm08 wrote

Oh, these are just the surface-level issues. The more you dig, the more you realize that things are looking bad. We don't have a real plan for AI safety, and it will likely end us as a result. This can't be one of those things where we get it wrong and improve later, because if we fail just once, there are no more people left to learn from the mistake.

1

Glissssy t1_je5hk3i wrote

I can't really see how that would be "the line," given this is just automated photoshopping, the kind of thing that has been a pastime online for many years.

1

packetofforce t1_je5tg92 wrote

Even if he actually meant it (which I doubt; his brain probably just automatically said "celebrities" due to the context of the situation), it is way easier to make deepfakes of celebrities than of average people, because celebrities have way more available data (photo, video, audio) out there. It makes sense for the line to be celebrities: deepfakes of average people are more technically difficult (availability of data), so chronologically, hyper-real deepfakes of average people are further down the line, and by regulating at celebrities you also prevent deepfakes of average people. And wtf is your comment? The way you split hairs about his wording in such an aggressive manner was weird. Try visiting a therapist.

0

almightySapling t1_je6kea4 wrote

I'm not worried about deepfake images, audio, or video.

I'm worried about deepfaked websites. I want to know that when I go to the Associated Press, or Reddit, I'm actually seeing that site, with content sourced from the appropriate avenues.

I do not want to live in a walled garden of my internet provider's AI, delivering me only the Xfinity Truth.

1

DefiantDragon t1_je74nbx wrote

Low-Restaurant3504

>Oh, the line is celebrities. People with money and influence. Gotcha. Not you or me. Our "betters". Just gonna go commission some deepfake porn of this guy's mother and see how he feels then.

I'm going to make a deepfake of Jada Smith starring in GI Jane so that Will Smith will come to my house and slap me.

4

ForeignSurround7769 t1_je7ioxu wrote

Wouldn't it be an easy fix to make laws against using AI to impersonate anyone? We should all have a right to own our face and voice. That seems simple enough to regulate as well. It won't stop all of it, but it would be better than nothing.

1

Low-Restaurant3504 t1_je8a1dh wrote

Ooooh. Weaponizing mental health to win an online argument. There's really not a whole lot lower you can go as a person. Hell, I find it distasteful, and if it's making me feel a bit icky, I can imagine how strongly it's gonna make others feel.

Be better. For real, man.

1

Several-Duck6956 t1_je9l7y7 wrote

You think you can get used to seeing a video of your mom’s gaping b hole being filled up by a rando? Like, ever? Or perhaps, your face being glued onto some of the sickest porn shit ever created? Is that something humans get used to? How long does it take?

2