
Spziokles t1_jcdw6q7 wrote

I don't work in the field either, so I just forwarded your question to Bing, lol. I thought maybe it could pull key takeaways from that "Practical Guide" (see above) to answer your question:

> According to this article, creating a culture in which a data and AI ethics strategy can be successfully deployed and maintained requires educating and upskilling employees, and empowering them to raise important ethical questions. The article also suggests that the key to a successful creation of a data and AI ethics program is using the power and authority of existing infrastructure, such as a data governance board that convenes to discuss privacy [1].

> In addition, a blog post on Amelia.ai suggests that an AI ethics team must effectively communicate the value of a hybrid AI-human workforce to all stakeholders. The team must be persuasive, optimistic and, most importantly, driven by data [2].

> Finally, an article on Salesforce.com suggests that the AI ethics team not only develops its own strategy, but adds to the wider momentum behind a better, more responsible tech industry. With AI growing rapidly across industries, understanding how the practices that develop and implement the technology come together is invaluable [3].

  1. https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
  2. https://amelia.ai/blog/build-a-team-of-ai-ethics-experts/
  3. https://www.salesforce.com/news/stories/salesforce-debuts-ai-ethics-model-how-ethical-practices-further-responsible-artificial-intelligence/

> However, my main concern is whether or not AI ethics teams will be effective at helping promote ethical practices.

That surely depends on the company. Just speculating, but if that team gets fired because the bosses don't like what it (possibly for good reasons) recommends, then I don't see many ways for that team to be effective.
