Comments

Low-Restaurant3504 t1_je1yy1q wrote

Oh, the line is celebrities. People with money and influence. Gotcha. Not you or I. Our "betters". Just gonna go commission some deepfake porn of this guy's mother and see how he feels then.

417

WhatTheZuck420 t1_je3ai7m wrote

make sure she's wearing combat boots tho

57

Known2779 t1_je44q7t wrote

It’s still not too late to jump on that influencer bandwagon. Or risk seeing yourself in a puffy coat on the internet.

5

DefiantDragon t1_je74nbx wrote

Low-Restaurant3504

>Oh, the line is celebrities. People with money and influence. Gotcha. Not you or I. Our "betters". Just gonna go commission some deepfake porn of this guy's mother and see how he feels then.

I'm going to make a deepfake of Jada Smith starring in GI Jane so that Will Smith will come to my house and slap me.

4

packetofforce t1_je5tg92 wrote

Even if he actually meant it (which I doubt; his brain probably just said "celebrities" automatically given the context), it's way easier to make deepfakes of celebrities than of average people, because celebrities have far more data available (photos, video, audio). It makes sense for the line to be celebrities: deepfakes of average people are more technically difficult because of that data gap, so hyper-real deepfakes of average people come further down the line chronologically, and by regulating at celebrities you also head off deepfakes of average people. And wtf is your comment? The way you split hairs over his wording in such an aggressive manner was weird. Try visiting a therapist.

0

Low-Restaurant3504 t1_je675u1 wrote

Please be quiet while the adults are talking. Thanks.

0

packetofforce t1_je89awc wrote

Your behavior is quite disappointing for someone who considers themselves an adult. By the way, https://bestonlinetherapyservices.com https://www.betterhelp.com/get-started/

0

Low-Restaurant3504 t1_je8a1dh wrote

Ooooh. Weaponizing mental health to win an online argument. Really not a whole lot lower you can go as a person. Hell, I find it distasteful, and if it's making me feel a bit icky, I can imagine how strongly that's gonna make others feel.

Be better. For real, man.

1

packetofforce t1_je8b9k2 wrote

Nope, I genuinely think that you need a therapist. It's not a joke. The way you got aggressive about wording was just weird. Particularly since it has come to light that you apparently are an adult.

0

Low-Restaurant3504 t1_je8bmp3 wrote

You do what you gotta do then, bud. You're a little too weird for me, however, so I'm just gonna block you and move on with my life.

Good luck with... whatever this is that you are going through.

1

Fastriverglide t1_je1xgtp wrote

Is there deepfake porn of EVERY celebrity yet?

61

Trout_Shark t1_je220fa wrote

Pretty much. At least all the current hot ones.

41

MiserableLychee t1_je3xhhc wrote

I want Alan Alda deepfakes

10

Trout_Shark t1_je4f9vp wrote

Mom? I thought you said you would stay off reddit...

5

Fastriverglide t1_je3so3m wrote

Hmm is the Pope hot to someone? Is there porn of Mohammed?

3

LiberalFartsMajor t1_je3u0o6 wrote

You just put that in the universe

9

Fastriverglide t1_je46f1d wrote

My legacy! <3

1

LiberalFartsMajor t1_je46kk6 wrote

I'm picturing the Pope and Muhammad giving each other handjobs under their robe / cloak

1

Fastriverglide t1_je46w6t wrote

That COULD be part of the interfaith dialogue. I mean who KNOWS WHAT WILL work in the end.

I'd much prefer to have them united under such circumstances rather than in the mouthfrothing hatred of atheists 🤔

4

Glader t1_je4k2kz wrote

Gilbert Gottfried? Now that he's passed on and become an ex-comedian, he'll never be able to make anything real.

1

aflarge t1_je2sxx4 wrote

So are they gonna ban using photoshop to doctor pictures of the unconsenting? They're being sensationalist idiots.

47

EmbarrassedHelp t1_je2tjg0 wrote

You joke, but I could see governments trying to pressure Adobe into adding AI to Photoshop that constantly scans what you're making in order to block things they don't like.

21

ozonejl t1_je3ay6c wrote

I’m in the Adobe Firefly beta and the content filters are pretty restrictive. Deleted what I thought were a couple of innocuous words from my prompts, and it wouldn’t let me use “Michael Jackson.” To be fair, I was trying to make Michael Jackson at the karaoke bar with G.G. Allin, who apparently Adobe doesn’t know about.

11

aflarge t1_je2v8pl wrote

Seems like a surefire way to make sure Photoshop ceases to be an industry standard.

5

BobRobot77 t1_je3puh4 wrote

Well, the line should be drawn somewhere. I think sexual content of a non-consenting non-public figure is the line.

1

Tiamatium t1_je4wk12 wrote

Yeah, it already is, and has been for decades (Photoshop, ever heard of it?). This is literally not a new problem, and we have a solution codified into law throughout most of the world.

2

EnsignElessar t1_je55k6k wrote

I'm not sure they could even ban it at this point... it's too late. But something needs to be done; otherwise our internet will be mostly bots, same deal as phone calls (probably already the case), and the scamming is about to get a whole lot more effective and scalable.

1

TheFriendlyArtificer t1_je2qiul wrote

How?

The neural network architectures are out in the wild. The weights are trivial to find. Generating your own just requires a ton of training data and some people to annotate. And that's assuming an unsupervised model.

I have a stripped down version of Stable Diffusion running on my home lab. It takes about 25 seconds to generate a single 512x512 image, but this is on commodity hardware with two GPUs from 2016.

If I, a conspicuously handsome DevOps nerd, can do this in a weekend and can deploy it using a single Docker command, what on earth can we do to stop scammers and pissant countries (looking at you, Russia)?

There is no regulating our way out of this. Purpose built AI processors will bring down the cost barrier even more substantially. (Though it is pretty cool to be able to run NN inferences on a processor architecture that was becoming mature when disco was still cool)

Edit: For the curious, the repo with the pre-built Docker files (not mine) is https://github.com/NickLucche/stable-diffusion-nvidia-docker
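A minimal invocation of that container might look like the following. The image name and port are taken from the linked repo, but the exact flags and environment variables are assumptions from memory of its README, so verify them there before relying on this:

```shell
# Run the prebuilt Stable Diffusion container from
# NickLucche/stable-diffusion-nvidia-docker (flags assumed from that
# repo's README): --gpus exposes the NVIDIA GPUs to the container,
# DEVICES selects which ones the app may use, and the web UI is
# served on http://localhost:7860.
docker run --name stable-diffusion --gpus all \
  -e DEVICES=all -p 7860:7860 \
  nicklucche/stable-diffusion
```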

46

DocHoss t1_je3f5k0 wrote

You really are very handsome! And really smart too.

You want to share that Docker command for a poor, incompetent AI dabbler?

Did I mention you are very handsome and smart?

15

lucidrage t1_je3zmti wrote

What's your dockerfile setup, you incredibly handsome devops engineer? I could never get the docker container to recognize my gpu on windows...

3

NamerNotLiteral t1_je533mj wrote

I only see one way to regulate models whose weights are public already.

Licenses hard-built into the GPU itself, through driver code or whatever. Nvidia and AMD can definitely do this. When you load the model into the GPU, they could check the exact weights, and if it's a 'banned' model they could shut it down.

Most of these models are too large for individuals to train from scratch, so you'd only need to ban the weights floating around. Fine tuning isn't possible either, since you need to load the original model first before you fine-tune it.

Yes, there would be ways to circumvent this, speaking as a lifelong pirate. But it's something that could be done by Nvidia, and would immediately massively increase the barrier to entry.
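A toy sketch of that driver-side gate, assuming the simplest possible scheme (exact-match hashing of the weight file; all names here are hypothetical, not any real driver API):

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of banned weight files.
# The digest below is sha256(b"test"), standing in for a real model's weights.
BANNED_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def digest_weights(blob: bytes) -> str:
    """Fingerprint a weight file by its SHA-256 digest."""
    return hashlib.sha256(blob).hexdigest()

def load_allowed(blob: bytes) -> bool:
    """Driver-side gate: refuse to load a model whose digest is banned."""
    return digest_weights(blob) not in BANNED_DIGESTS
```

Note the obvious circumvention this implies: perturbing even a single weight changes the digest, so a real scheme would need some kind of fuzzy fingerprinting of the weights, which is a much harder problem.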

2

Trip-trader t1_je3ho92 wrote

Making deepfakes is one thing, sharing them with the internet and millions of people is another. Damn straight you can regulate the crap out of anything. Go ask the EU.

0

Call-Me-Robby t1_je3y5w0 wrote

As the war on drugs showed us, there’s a very wide gap between laws and their enforcement.

11

FuckOff555555 t1_je3vsfr wrote

The easiest way would be to force Nvidia, AMD, Intel, and Apple not to allow AI training on consumer hardware.

−9

SwagginsYolo420 t1_je46oaa wrote

The hardware is already out there though.

Also it would be a terrible idea to have an entire new emerging technology only in the hands of the wealthy. That's just asking for trouble.

It would be like saying regular hardware shouldn't be allowed to run photoshop or a spreadsheet or word processor because somebody might do something bad with it.

People are going to have to learn that images and audio and video can be faked, just like they had to learn that an email from a Nigerian prince is fake.

There's no wishing this stuff away, the cat is already out of the bag.

10

Glittering_Power6257 t1_je49f6f wrote

As Nvidia fairly recently learned, blockades against running certain algorithms will be circumvented. Many applications also use GPGPU to accelerate non-graphics workloads (GPUs are pretty much highly parallel supercomputers on a chip), so cutting off GPGPU is not on the table either, unless you want to completely screw over the open-source community and go the whitelist route.

4

Special_Function t1_je430im wrote

You could have photoshopped the pope with the same jacket and gotten the same response.

30

EnsignElessar t1_je55ao5 wrote

True, but this is way easier. The ability to do this has existed for a long time, but it was a hard skill to learn. Now, just from your phone, you can type "Pope in a weird coat" and it will be created for you. Something else to consider: it's text-to-image today, sure, but tomorrow it will be text-to-video, and then you combine that with text-to-audio. So a single person, not a studio, can easily make a fake of anyone saying anything you like.

4

workworkworkworky t1_je5ox9l wrote

Well, if it gets that easy, these things will be everywhere and everyone will just get used to them.

5

Several-Duck6956 t1_je9l7y7 wrote

You think you can get used to seeing a video of your mom’s gaping b hole being filled up by a rando? Like, ever? Or perhaps, your face being glued onto some of the sickest porn shit ever created? Is that something humans get used to? How long does it take?

2

MetricVeil t1_je2x69a wrote

Rule 34. Plus, fakes of celebrities have been around for almost as long as the internet. The only difference is that the quality of the fakes has improved, exponentially.

To be honest, porn is a major factor on whether some technologies are taken up by the masses. :D

15

EnsignElessar t1_je566x6 wrote

No, that's not the only thing that has changed. It's also very scalable. Even a single individual could launch a massive disinformation campaign from their smartphone.

1

cinemachick t1_je31zzi wrote

On the one hand, we are definitely on the edge of a world where anything can be faked. On the other hand, we've been down this road before: Photoshop, "realistic" CGI, dodging and burning pinup prints, the fairy photograph hoaxes of the early 1900s, etc. We learn and adapt to changes incrementally, not everyone and not all at once, but we get there eventually. And let's be honest, misinformation has been in place in the media for years - the sinking of the Lusitania was completely fabricated to create justification for war, way before anyone had AI or Photoshop. It all comes down to who the source is and their credibility, has been since the dawn of the written word.

(But tbf, I'm in an industry that will be hit hard by AI so I understand the panic!)

14

ozonejl t1_je3bh7o wrote

Good to see a reasonable person who doesn’t just see a threat to their job and freak out. New technology always comes with the same concerns and challenges. I’m kinda like… people already fall for loads of obviously, transparently fake shit on Facebook, but somehow this is gonna be so much worse?

2

RayTheGrey t1_je4adys wrote

It's the ease and speed of it that might be the difference.

6

EnsignElessar t1_je55ta5 wrote

Yes it will be worse. Because of scale. Instead of having to have an expert sit there making fakes and trying to spread them. You can automate most of that.

2

bobnoski t1_je55nlf wrote

The ease, speed, and accuracy of it. It's now possible, within minutes of a live video being broadcast, to use deepfakes and AI voice generation to modify a video of a world leader. The entire video doesn't have to be faked or edited; say you edit a world leader saying "we will support Ukraine" into "we will no longer support Ukraine". Set it on blast, or in more repressive regimes run it as if it were the live feed, and disproving it is a far harder task than disproving an article that says "this world leader said this thing".

The more realistic, multi-faceted, and abundant fakes become, the higher the chance that people no longer trust the real thing.

1

almightySapling t1_je6kea4 wrote

I'm not worried about deepfake images, audio, or video.

I'm worried about deepfaked websites. I want to know that when I go to Associated Press, or Reddit, I'm actually seeing that site, with content sourced from the appropriate avenues.

I do not want to live in a walled garden of my internet provider's AI, delivering me only the Xfinity Truth.

1

rsta223 t1_je8d9kn wrote

>the sinking of the Lusitania was completely fabricated

No, it was a real ship that was genuinely sunk by an actual German U-boat.

1

KillBoxOne t1_je2798g wrote

Regulation? How about you just don’t do it? It’s like he’s saying, “I did it because the government didn’t stop me!”

Edit: I get the larger need for regulation. It’s just funny how the guy who did it gets caught, then pivots to saying more regulation is needed.

10

Better_Path5755 t1_je2qb2v wrote

The cat's outta the bag. Morality is mostly a human construct; if someone can do something, whether it's right or wrong, best believe they will. I'm with you though, as an artist.

6

MetricVeil t1_je2wiu5 wrote

Yeah, that approach has really worked for robbery, murder, hacking... and all the other things people shouldn't do. :D

1

seamustheseagull t1_je4ecal wrote

It's fairly common for someone to make a demonstration of a power in order to prove the need to regulate it.

Whether or not he did this deliberately, the fact that the image has gained so much attention has obviously made him realise the danger here and now he's using his brief new platform to try and highlight that danger. I don't see the issue.

1

EnsignElessar t1_je56b7r wrote

Ok, so I won't use it. Then I just sit here and hope no one else does?

1

os12 t1_je2hlo6 wrote

Why would we want to involve government in regulating the means of making these images? The artists are free to draw and publish what they like... so, how is this different?

6

NoiceMango t1_je3whwy wrote

It's different when it's meant to impersonate someone.

2

os12 t1_je5dzmn wrote

I fail to see the point. Anyone can write prose and try to impersonate a writer. Or paint and try to impersonate a painter. Or program and try to impersonate a software firm.

None of that is regulated.

0

NoiceMango t1_je66d80 wrote

Impersonating someone in a very accurate way is much different. Try seeing harder.

2

RayTheGrey t1_je4aoua wrote

A single person could conceivably outproduce thousands of artists drawing/photoshoping images. And to verify whether something is true or not, you need people.

I'm not sure if anything can be done about it, but the sheer volume of content enabled by generative models is a little concerning.

1

os12 t1_je5dqza wrote

It is concerning... just like a single person that is able to compile a large program, or 3D print a complex model/tool, or spin up a scalable service in AWS.

So what? None of that is regulated.

0

EnsignElessar t1_je56fv1 wrote

Artists are much more expensive to hire.

1

os12 t1_je5dhwn wrote

Sure and why does this kind of democratization call for government regulation?

1

EnsignElessar t1_je5e5vc wrote

Ok, so I'm just a regular guy, but I have two ideas for how this all ends very badly for most people. One is automated scamming. Before, you needed a call center in India or somewhere, which could be pretty expensive, and if you wanted to scale you had to hire people, which took time. Now you can do it all on your own. The second issue is what a single prompt can do: "Create me a video of Biden announcing why he has just launched a tactical nuke on Russia." Oh boy. Even if we all just don't believe it, it would cause other problems, like not believing anything you see or read... I mean, you don't think these are issues?

1

os12 t1_je5f17b wrote

Yes, this kind of crap is the inevitable byproduct of new tech. That happens every time a new thing is created.

The good news is that companies will create similar tech to filter out the shit. Just think about scam email - this is the same thing at another level.

1

EnsignElessar t1_je5gm08 wrote

Oh, these are just the surface-level issues. The more you dig, the more you realize that things are looking bad. We don't have a real plan for AI safety, and it will likely end us as a result. This can't be one of those things where we get it wrong and improve later, because if we fail just once, there are no more people left to learn from the mistake.

1

TiredOldLamb t1_je42itm wrote

That gif of the pope doing a trick in front of the bishops is like 10 years old at this point, and it's beyond hilarious, much better than the puffy coat. But now that they're using AI, it's crossing the line?

5

EnsignElessar t1_je561dt wrote

It's super easy to use. Even a child can type "Pope in funny jacket." Then you can use other AI tools to further spread disinformation.

1

Cheshire1871 t1_je3qkh9 wrote

Why? They photoshop themselves beyond recognition. Those are fake; how is this different? They're both digitally altered. So, no more Photoshop?

4

GonnaGoFar t1_je2jtr0 wrote

Honestly, at this point, it seems like deep fake porn of celebrities and regular women is inevitable.

How can we stop that? Seriously.

2

BroForceOne t1_je33f11 wrote

Won't someone please think of the poor celebrities?!

2

KRA2008 t1_je3ajdn wrote

i'm sure i'm not the first to say it, but i think i'm going to go ahead and study oil painting for the next 50 years, so that the neural network in my head can create images just like this. if that doesn't work out i'll use photoshop to do the exact same thing. am i illegal?

2

In-Cod-We-Thrust t1_je44jn8 wrote

Every day I plead. I beg. I raise my voice to the Gods of all the heavens; “Please… just flood it one more time.”

2

AdGiers t1_je48msx wrote

Regulate what exactly?

2

EnsignElessar t1_je57unx wrote

Ai development.

1

AdGiers t1_je5g27k wrote

How exactly, bearing in mind the powers that be can barely regulate anything digital, such as piracy and crypto?

2

EnsignElessar t1_je5gak9 wrote

I actually don't think they will be willing or able to do it. It's a hard problem to solve. But I never give up, so I am asking anyway.

1

Biyeuy t1_je428ye wrote

Let's use it in the Ukraine war to defeat the Soviet fascists. Otherwise Putin takes the lead in AI-powered warfare.

1

MensMagna t1_je439q9 wrote

How could anyone even think that image of the pope is real?

1

Neiko_R t1_je44v03 wrote

I don't know what people are trying to achieve with "regulation"; these AIs are already open source and can be used by anyone.

1

MeloveTHICCbootay t1_je46ggb wrote

regulate this regulate that. How about you stop trying to have government regulate everything in our fucking lives. for fucks sake. fuck off.

1

seamustheseagull t1_je4dypk wrote

It's about a decade ago now that I first recall conversations about this problem.

Back then we knew this was going to happen.

And many solutions to this problem existed even back then, including digital signing of images and videos to verify when they were produced and by whom.

We've had at least a decade to prepare for this, and nobody in the media or tech sectors has bothered to do fucking anything.

So now we get a couple of years of pure chaos as fake images get produced which are virtually indistinguishable from reality, and everyone is scrambling to put measures in place to fix this.
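The signing idea mentioned above can be sketched in a few lines. This toy uses a shared-secret HMAC as a stand-in for the asymmetric signatures that real provenance schemes (e.g. C2PA) use, and the function names are purely illustrative:

```python
import hashlib
import hmac

def sign_image(image_bytes: bytes, key: bytes) -> str:
    """Produce a tag binding the publisher's key to these exact image bytes.
    (HMAC here stands in for a real asymmetric signature.)"""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, key: bytes, tag: str) -> bool:
    """Check that the image is unmodified since signing.
    compare_digest avoids timing side channels."""
    return hmac.compare_digest(sign_image(image_bytes, key), tag)
```

Any single-byte change to the image invalidates the tag, which is exactly the property a provenance scheme needs; the hard, unsolved part is the surrounding key distribution and getting cameras, editors, and platforms to carry the signatures end to end.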

1

G33ONER t1_je4h8lq wrote

The Pope looked Dope, has he made a statement?

1

Glader t1_je4kcma wrote

Can someone please take Gilbert Gottfried's "The Aristocrats" clip, speech-to-text it, and feed it to an artist AI?

1

The_DashPanda t1_je4r5ru wrote

In this age of hyper-consumerist "late-stage" capitalism, the head of a world religion wearing expensive elitist clothes might just be too believable for some to distinguish artistic expression from visual record, but I'm sure they'll just blame the technology and push for censorship.

1

banterism t1_je4uvgk wrote

We draw the line at puffy jackets, this is an outrage! Lol

1

H809 t1_je4zp9t wrote

Look, famous people: AI is dangerous for your image, we need better regulation, because if it's dangerous for you almighty individuals, it's bad for humanity. Fucking simp.

1

Jman1a t1_je55p69 wrote

“I am a man of the cloth, a servant of God!” sits on his solid gold throne

1

Glissssy t1_je5hk3i wrote

I can't really see how that would be "the line", given this is just automated photoshopping, the kind of thing that has been a pastime online for many years.

1

ForeignSurround7769 t1_je7ioxu wrote

Wouldn’t it be an easy fix to make laws against using AI to impersonate anyone? We should all have a right to own our face and voice. That seems simple enough to regulate as well. It won’t stop all of it but will be better than nothing.

1

nadmaximus t1_je4870d wrote

How? Also...the pope is a celebrity?

0