w1n5t0nM1k3y t1_jckbyfg wrote

What do you mean by cloaking techniques? I'd like more information on that.

Do they have humans reviewing each and every ad before it is shown to users?

11

jhachko t1_jclc2au wrote

There are huge ad approval teams that this stuff goes to for review, often offshore, and these companies now also rely on image recognition software. But there are ways to show the reviewers one image and serve a different image once the ad is in distribution across the ad networks. Search for Google cloaking, Facebook cloaking, etc. and you'll see listings for it. I know a guy who did that stuff. Too complex for me, but it exists.
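To make the trick concrete, here's a minimal sketch of how conditional serving like this can work. Everything in it is illustrative and assumed: the Flask route, the reviewer IP ranges, and the image filenames are hypothetical, not taken from any real cloaking kit.

```python
# Illustrative sketch of ad "cloaking": serve one image to suspected
# reviewers/crawlers and a different one to everyone else.
# The IP ranges, user-agent strings, and filenames below are hypothetical.
import ipaddress
from flask import Flask, request, send_file

app = Flask(__name__)

# Hypothetical IP ranges believed to belong to the ad platform's
# review infrastructure (data-center and crawler ranges).
REVIEWER_NETWORKS = [
    ipaddress.ip_network("66.249.64.0/19"),
    ipaddress.ip_network("69.171.224.0/19"),
]

def looks_like_reviewer(req) -> bool:
    """Heuristic: does this request appear to come from the ad platform?"""
    ua = (req.headers.get("User-Agent") or "").lower()
    if "googlebot" in ua or "facebookexternalhit" in ua:
        return True
    try:
        ip = ipaddress.ip_address(req.remote_addr)
    except ValueError:
        return False
    return any(ip in net for net in REVIEWER_NETWORKS)

@app.route("/ad-creative.png")
def ad_creative():
    # Reviewers get the compliant image that was submitted for approval;
    # real users get the swapped-in one.
    if looks_like_reviewer(request):
        return send_file("approved_creative.png", mimetype="image/png")
    return send_file("real_creative.png", mimetype="image/png")
```

The whole trick is that one branch: the advertiser's server decides what to return based on who appears to be asking.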

−1

w1n5t0nM1k3y t1_jcle3x7 wrote

An easy way to get rid of that is to have the images served off Facebook's/Google's servers rather than letting the advertiser host them.

What they're doing now is the equivalent of selling someone a TV ad spot while having no control over what content actually airs during the time slot. Serve the validated ad from Facebook's own servers and there's no way for it to be swapped out later.
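A minimal sketch of that idea, under assumed names: the upload/approve/serve functions and the hash-based storage below are illustrative, not any actual Facebook/Google API. The point is that the ad references an immutable content hash, so the bytes a reviewer approved are exactly the bytes that get served.

```python
# Sketch of review-then-host ad serving. All names are hypothetical.
# The creative is stored under its SHA-256 hash on platform-controlled
# storage; swapping the image would change the hash and fail approval.
import hashlib
from pathlib import Path

CREATIVE_STORE = Path("creative_store")  # platform-controlled storage

def ingest_creative(image_bytes: bytes) -> str:
    """Advertiser uploads the image; the platform stores the bytes
    under their SHA-256 digest. The advertiser never hosts the copy."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    CREATIVE_STORE.mkdir(exist_ok=True)
    (CREATIVE_STORE / digest).write_bytes(image_bytes)
    return digest

def approve_creative(digest: str, approved: set[str]) -> None:
    """A reviewer approves the exact stored bytes, keyed by hash."""
    approved.add(digest)

def serve_creative(digest: str, approved: set[str]) -> bytes:
    """Ads may only reference approved hashes, so the served creative
    is byte-for-byte what the reviewer saw."""
    if digest not in approved:
        raise PermissionError("creative not approved")
    return (CREATIVE_STORE / digest).read_bytes()

if __name__ == "__main__":
    approved: set[str] = set()
    digest = ingest_creative(b"fake image bytes")
    approve_creative(digest, approved)
    assert serve_creative(digest, approved) == b"fake image bytes"
```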

0

[deleted] t1_jckcwo0 wrote

...that's a rabbit hole you don't want to get into, but yes.

When videos are flagged for harmful content, it's generally some third-worlder being forced to sit through videos of extreme violence for hours on end.

Social media companies literally export emotional labor onto the third world so that first-world children aren't exposed to beheadings on YouTube and Facebook.

−10

w1n5t0nM1k3y t1_jckd762 wrote

This is about ads, not about content posted by users. They are taking money for ads, and should have someone reviewing the ads for harmful content or just general scamminess before allowing them to be posted.

11

TommyHamburger t1_jckrzf2 wrote

Social media content reviewers are not just in third world countries. Read any article about the "nightmare" job experience - they're pretty much worldwide.

2