Submitted by namey-name-name t3_11sfhzx in MachineLearning
Hydreigon92 t1_jce0yhf wrote
Reply to comment by rustlingdown in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
> Ethics teams are only useful if they are actively incorporated with and listened to by engineering and business teams

I'm an ML fairness specialist who works on a responsible AI team, and in my experience, the best way to do this is to operate as a fully-fledged product team whose "customers" are other teams in the company.
For example, I built an internal Python library that other teams can use to perform fairness audits of recommendation systems, so they can compute and report these fairness metrics alongside traditional rec. system performance metrics during the model training process. Now, when the Digital Services Act goes into effect and we are required to produce yearly algorithmic risk assessments of recommender systems, we already have a lot of this tech infrastructure in place.
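As a toy illustration of the kind of metric such an audit computes (names here are illustrative, not our actual API): average position-discounted exposure per item group, reported next to the usual ranking metrics.

```python
# Hypothetical sketch, not the internal library itself: one way a fairness
# audit metric can sit alongside standard ranking metrics.
import numpy as np

def dcg_exposure(ranks: np.ndarray) -> np.ndarray:
    """Position-discounted exposure for 1-indexed ranks (log2 discount)."""
    return 1.0 / np.log2(ranks + 1)

def group_exposure(slates, item_groups):
    """Average exposure each item group receives across recommendation slates.

    slates: list of ranked item-id lists (one per user).
    item_groups: dict mapping item id -> group label.
    """
    totals = {}
    for slate in slates:
        ranks = np.arange(1, len(slate) + 1)
        for item, exp in zip(slate, dcg_exposure(ranks)):
            g = item_groups[item]
            totals[g] = totals.get(g, 0.0) + exp
    n = len(slates)
    return {g: v / n for g, v in totals.items()}

# Toy audit: two groups of items, three users' slates.
item_groups = {1: "A", 2: "A", 3: "B", 4: "B"}
slates = [[1, 3, 2], [3, 1, 4], [2, 4, 3]]
print(group_exposure(slates, item_groups))
# A badly skewed exposure split gets flagged alongside the usual
# accuracy/NDCG numbers during model training.
```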
U03B1Q t1_jcf92lp wrote
This work is exactly the kind of thing I'm interested in doing. Do you mind if I DM you for some career advice?
Hydreigon92 t1_jcfoy2u wrote
Sure! I'm always happy to chat about this.
edjez t1_jchqj0v wrote
Agree 100% that it is important to have people embedded in product teams who are accountable for this.
AI ethics teams are also useful because they understand and keep track of the metrics, benchmarks, and methods used to evaluate biases, risks, and harms. This is a super specialized area of knowledge that the whole company and community can capitalize on. It is also hard to keep up to date; it needs close ties to civil society and academic institutions, etc. Think of it as setting up a "pipeline", a supply chain of practices, that starts with real-world insight and academic research and ends with actionable, implementable methods, code, and tools.
In very large orgs, having specialized teams helps scale up company wide processes for incident response, policy work, etc.
You can see some of the output of this work at Microsoft if you search for Sarah Bird's presentations.
(cheers from another ML person who also worked w reco)
thedabking123 t1_jcfupuc wrote
Thank god that only applies to giant platforms... Our firm would crumble in the face of that.
keepthepace t1_jcijjq2 wrote
> fairness metrics
Do you produce any that are differentiable? It could be interesting to add them to a loss function.
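A soft version seems straightforward, e.g. a demographic-parity gap on the predicted scores (a rough PyTorch sketch, assuming a binary sensitive attribute and batches containing both groups; `dp_penalty` and `fair_loss` are made-up names):

```python
import torch
import torch.nn.functional as F

def dp_penalty(logits: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """|mean score for group 1 - mean score for group 0|, differentiable in logits."""
    p = torch.sigmoid(logits)
    return (p[s == 1].mean() - p[s == 0].mean()).abs()

def fair_loss(logits, targets, s, lam=0.1):
    # Standard task loss plus a weighted fairness penalty.
    return F.binary_cross_entropy_with_logits(logits, targets) + lam * dp_penalty(logits, s)

# Toy usage: gradients flow through the fairness term as well.
logits = torch.randn(8, requires_grad=True)
targets = torch.randint(0, 2, (8,)).float()
s = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
fair_loss(logits, targets, s).backward()
```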
namey-name-name OP t1_jcf524y wrote
That’s really cool, it’d be awesome if something like that was built into TensorFlow or PyTorch.
Hydreigon92 t1_jcfpc49 wrote
I'm involved with the Fairlearn project, so once I figure out what's necessary from a company policy side, my plan is to incorporate these methods into Fairlearn one day.
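Fairlearn's `MetricFrame` already gives a feel for where these would land: it disaggregates any sklearn-style metric by sensitive feature (a small example with toy data):

```python
from sklearn.metrics import recall_score
from fairlearn.metrics import MetricFrame

y_true = [0, 1, 1, 0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(metrics={"recall": recall_score},
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=sex)
print(mf.by_group)      # recall per group
print(mf.difference())  # largest between-group gap
```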
edjez t1_jchqnxm wrote
Awesome!