imdb_shenanigans t1_j1zsxdx wrote

You are misreading the article yourself slightly. The article says that AI needs labeled data, yes, and that some of the adjacent non-AI work, like content moderation of objectionable photos, is PTSD-inducing rote work that outsourced workers do on a regular basis. If Meta outsources this to a contractor like Sama, which then exploits its own local labor force to accomplish the tasks (not even paying them regular wages), it is no different from Nike employing workers in sweatshops to build shoes, or from sourcing blood cobalt and diamonds.

The article discusses the politics of this. Companies hide the fact that all of this content moderation and labeling is done by poorly paid and poorly managed workers elsewhere, and it would need better contractual oversight than just outsourcing and looking the other way, which is the norm today. Whether your university employs a bunch of guys on Fiverr to do it is a different issue. So the problem here is the same unethical sourcing and manufacturing we see in other domains, and AI is no different. But while a consumer today can ask about shoes and sweatshops, they have no clue that AI model data is another Nike shoe.

3

quantumfucker t1_j201exu wrote

This isn’t really being hidden; consumers just don’t care. The article itself cites authorities saying so.

From Facebook’s public content policy, on their own website, regarding their use of AI: “Sometimes, a piece of content requires further review and our AI sends it to a human review team to take a closer look. In these cases, review teams make the final decision, and our technology learns and improves from each decision. Over time—after learning from thousands of human decisions—the technology gets better.” They are not hiding the human labor behind the AI. This is from a source in the article, and it is already different from companies that use sweatshops they try to hide and disavow knowledge of.
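For what it’s worth, the loop Facebook describes is a standard human-in-the-loop pattern. Here’s a minimal sketch of it in Python; the names, thresholds, and stub reviewer are my own illustration, not Meta’s actual system:

```python
# Hypothetical sketch of the AI-plus-human-review moderation loop described
# above. All names and thresholds are illustrative, not Meta's real system.
from dataclasses import dataclass

@dataclass
class Decision:
    content_id: str
    label: str       # "allow" or "remove"
    decided_by: str  # "model" or "human"

# Human decisions get logged as new labeled examples for retraining.
training_data: list[tuple[str, str]] = []

def human_review(content_id: str) -> str:
    """Stand-in for the outsourced human review team the article is about."""
    return "remove"  # a reviewer's final call; hardcoded for this sketch

def moderate(content_id: str, model_score: float,
             low: float = 0.2, high: float = 0.9) -> Decision:
    # Confident model scores are decided automatically.
    if model_score >= high:
        return Decision(content_id, "remove", "model")
    if model_score <= low:
        return Decision(content_id, "allow", "model")
    # Ambiguous content is escalated; the human's decision is logged as a
    # new training example, which is how "the technology gets better."
    label = human_review(content_id)
    training_data.append((content_id, label))
    return Decision(content_id, label, "human")

print(moderate("post-123", 0.55))
# Decision(content_id='post-123', label='remove', decided_by='human')
```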

And these outsourcing companies proudly advertise big tech as their clients: https://www.sama.com/

Others use the very popular MTurk service for this, promoted publicly by Amazon, to the point that universities are aware of it and use it to advance academia, describing these services in their methodology sections. This is all available information that is being actively marketed.
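To underline how openly this labor is marketed: requesting labeling work on MTurk is an ordinary, documented API call through boto3. A hedged sketch (the task text, reward, and values are placeholders of mine, and this points at the sandbox endpoint):

```python
# Illustrative sketch of posting a labeling task to Amazon Mechanical Turk
# via boto3. Values are placeholders; this targets the sandbox endpoint.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A minimal QuestionForm asking a worker to label one image.
question_xml = """
<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>label1</QuestionIdentifier>
    <QuestionContent><Text>Does this image contain a cat? https://example.com/img1.jpg</Text></QuestionContent>
    <AnswerSpecification><FreeTextAnswer/></AnswerSpecification>
  </Question>
</QuestionForm>
"""

hit = mturk.create_hit(
    Title="Label one image",
    Description="Answer a yes/no question about an image.",
    Keywords="image, labeling",
    Reward="0.05",                    # USD per assignment
    MaxAssignments=3,                 # three workers label the same item
    LifetimeInSeconds=60 * 60 * 24,   # task stays visible for one day
    AssignmentDurationInSeconds=300,  # five minutes per worker
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

Nothing about that is hidden; it’s the documented, publicly advertised interface.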

This article’s headline and much of its content make it sound like a conspiracy specific to AI, rather than “by the way, issues with the global labor market apply to the labor behind AI too.” The transparency, or lack thereof, isn’t even the problem, because people don’t care. American consumers enjoy cheap products; people in foreign countries consider American outsourcing better than jobs in their localities (and data labeling and content moderation are significant improvements over physical labor; the article itself cites someone saying as much); and the countries that accept America’s outsourced labor benefit politically and economically.

I don’t like exploitation, and I think all content moderators should have readily available mental health support, but this is what a global liberal marketplace looks like, and I’m wary of blaming AI and the companies behind it instead of examining the economic systems that promote these issues. The technology and its needs aren’t the problem. It’s not as if big tech is marketing a camera with a child inside who quickly draws what they see and hands it to you. They need a tough job done cheaply that Americans don’t want to do, not unlike how that’s a big reason Americans allow immigrants in in the first place.

Also worth noting that the author doesn’t actually have a background in technology, but is rather an artist who also writes about AI ethics. I apply extra scrutiny to the narratives painted by that kind of author.

2

reconrose t1_j20d89u wrote

Just because it's publicly available information doesn't mean it isn't left undisclosed in nearly every article about new ML models. I think you took the word "hidden" too literally.

1

quantumfucker t1_j20f9dv wrote

When drawing a comparison to Nike and other companies with a history of literally hiding their labor abuses, what “hidden” literally means does matter in terms of accountability. There isn’t much point in mentioning, in most articles, what humans do to train new ML models. The data labeling and content moderation angles are not really relevant to a model’s own impact and application, and those processes really don’t change. This isn’t new information at all.

1