w1n5t0nM1k3y t1_jcjwlyw wrote
>The FTC is “seeking information on how these companies scrutinize and restrict paid commercial advertising that is deceptive or exposes consumers to fraudulent health-care products, financial scams, counterfeit and fake goods, or other fraud.”
Spoiler, these companies aren't doing anything to stop scam ads
Unusual-Chemical5828 t1_jcm4eqv wrote
They’re accepting all scam ads. Good on the FTC, though it may not be enough.
YouTube is blatantly allowing a Mr Beast impersonation ad offering free cash to anyone who clicks on the video. Reporting it does absolutely nothing; they even say they can't tell you whether the ad violated the rules or whether they took action. You know they don't take action, because the ad appears again and again for months.
jhachko t1_jclcbwz wrote
What's with all the idiots downvoting the factual replies? Good god, Redditors are a hive-minded bunch.
PablosDiscobar t1_jcmvnhp wrote
Insane. Each platform has ginormous teams dealing only with ads moderation and ad policy, with endless meetings about what signals to use to detect fraud, counterfeits, etc. I know because I've worked on these issues. It's not as clear-cut as one would think to establish moderation policies that can be applied universally without blocking bona fide advertisers.
almisami t1_jcnbe9y wrote
Redditors, or sock puppets from said companies doing damage control?
Twisted_Apple20 t1_jcmt81g wrote
You have to provide evidence before you call something "factual"
jhachko t1_jcrg3r0 wrote
Work in the industry. Know it to be true
nicuramar t1_jconsg9 wrote
It’s not actually a factual reply, though. Especially not without provided evidence.
jhachko t1_jcrg0up wrote
I work in the ad industry. Believe me or don't...idc
Kemizon t1_jcnbmig wrote
Just like Rupert Murdoch said recently: it's not about blue or red, it's about the green. Scam advertisers still have to pay money.
jhachko t1_jckbayt wrote
Spoiler, they do, but the sheer volume and cloaking techniques let stuff slip through the cracks. And some companies are more willing to turn a blind eye than others.
w1n5t0nM1k3y t1_jckbyfg wrote
What do you mean by cloaking techniques? I'd like more information on that.
Do they have humans reviewing each and every ad before it is shown to users?
jhachko t1_jclc2au wrote
There are huge ad approval teams this stuff goes to for review, often offshore, and these companies now also rely on image recognition software. But there are ways to show the reviewers one image and serve a different image once the ad is in distribution in the ad networks. Search for Google cloaking, Facebook cloaking, etc. and you'll see listings for it. I knew a guy who did that stuff. Too complex for me, but it exists.
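For anyone wondering what that looks like in practice: the gist is that the ad server branches on who it thinks is looking. A minimal sketch (all the reviewer-detection signals here, the IP prefixes, user-agent hints, and file names, are invented for illustration, not real platform values):

```python
# Hypothetical sketch of ad "cloaking": serve a clean creative to anything
# that looks like a platform reviewer/crawler, and the real (scam) creative
# to everyone else. The detection signals below are made up.

REVIEWER_IP_PREFIXES = ("66.249.", "173.252.")   # assumed review/crawler ranges
REVIEWER_UA_HINTS = ("AdsBot", "facebookexternalhit")

def serve_creative(client_ip: str, user_agent: str) -> str:
    """Return a different image depending on who appears to be asking."""
    looks_like_reviewer = (
        client_ip.startswith(REVIEWER_IP_PREFIXES)
        or any(hint in user_agent for hint in REVIEWER_UA_HINTS)
    )
    if looks_like_reviewer:
        return "innocent_creative.jpg"   # what the approval team sees
    return "scam_creative.jpg"           # what real users get served
```

That's why a one-time review pass doesn't catch it: the approval team and the end user literally never receive the same bytes.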
w1n5t0nM1k3y t1_jcle3x7 wrote
An easy way to get rid of that is to have the images served off Facebook's/Google's servers rather than letting the advertiser host them.
What they're doing now is the equivalent of selling someone a TV ad spot but having no control over what content actually airs during the time slot. Serve the validated ad from Facebook's servers and there's no way for it to be changed later.
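The idea above can be sketched simply: at approval time the platform records a hash of the exact creative that was reviewed, and at serve time it refuses anything that doesn't match byte-for-byte. This is just an illustration of the proposal, not any platform's actual system; the function names are hypothetical:

```python
# Sketch of "pin the reviewed creative": the platform hashes the creative
# at approval time and will only serve a byte-identical copy afterward.

import hashlib

approved_hashes: dict[str, str] = {}  # ad_id -> sha256 of the reviewed bytes

def approve_ad(ad_id: str, creative: bytes) -> None:
    """Record the hash of the creative the review team actually saw."""
    approved_hashes[ad_id] = hashlib.sha256(creative).hexdigest()

def serve_ad(ad_id: str, creative: bytes) -> bytes:
    """Serve the creative only if it matches what was approved."""
    if hashlib.sha256(creative).hexdigest() != approved_hashes.get(ad_id):
        raise ValueError("creative changed after review")
    return creative
```

With the platform hosting the file itself, the advertiser never gets a chance to swap the image, so the cloaking trick above simply has nothing to hook into (landing-page bait-and-switch is a separate problem).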
[deleted] t1_jckcwo0 wrote
...that's a rabbit hole you don't want to get into, but yes.
When videos are flagged for harmful content, it's generally someone in a third-world country being forced to sit through videos of extreme violence for hours on end.
Social media companies literally export emotional labor to the third world so that first-world children aren't exposed to beheadings on YouTube and Facebook.
w1n5t0nM1k3y t1_jckd762 wrote
This is about ads, not about content posted by users. They are taking money for ads, and should have someone reviewing the ads for harmful content or just general scamminess before allowing them to be posted.
[deleted] t1_jcmv223 wrote
[deleted]
TommyHamburger t1_jckrzf2 wrote
Social media content reviewers are not just in third world countries. Read any article about the "nightmare" job experience - they're pretty much worldwide.