
chute_amine t1_iu11wxz wrote

It’s complicated, but yes. We don’t use the sensitive traits as normal features in training - we use them to correct bias in the model along that dimension. This can be done during or after training, but it is a necessary check in any AI model that influences people.
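A minimal sketch of the post-training version of this idea (everything here is illustrative, not any particular company’s pipeline): the sensitive attribute never enters the model as a feature, but is used afterwards to pick per-group score thresholds so each group receives positive predictions at the same rate (demographic parity):

```python
# Hypothetical post-training bias correction via group-specific thresholds.
# The sensitive attribute is NOT a model input; it is only consulted after
# scoring, to equalize positive-prediction rates across groups.

def group_thresholds(scores, groups, target_rate):
    """For each group, pick the score cutoff so that roughly
    `target_rate` of that group receives a positive prediction."""
    thresholds = {}
    for g in set(groups):
        g_scores = sorted(s for s, grp in zip(scores, groups) if grp == g)
        # keep the top `target_rate` fraction of this group's scores
        k = int(len(g_scores) * (1 - target_rate))
        k = min(max(k, 0), len(g_scores) - 1)
        thresholds[g] = g_scores[k]
    return thresholds

def predict(scores, groups, thresholds):
    return [int(s >= thresholds[g]) for s, g in zip(scores, groups)]

# Toy scores from an upstream model: group "b" was scored lower on average.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

thr = group_thresholds(scores, groups, target_rate=0.5)
preds = predict(scores, groups, thr)
# A single global threshold of 0.55 would give group "b" no positives;
# per-group thresholds give both groups the same positive rate.
```

This is the "efficiency vs. fairness" trade-off in miniature: the per-group thresholds deliberately override the raw score ranking in order to equalize outcomes.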


RonPMexico t1_iu127ye wrote

It sounds like you are reducing the efficiency of the model in the name of equality.


chute_amine t1_iu13vjw wrote

Exactly, but what is more important? Revenue or fairness? It’s about finding the right balance. Each project/model has its own level of compromise.


RonPMexico t1_iu144rm wrote

I would say that, in the long term, efficiency will benefit everyone more than handicapping systems to produce desired outcomes.


chute_amine t1_iu16o6j wrote

Fair enough. But academia, the big names in tech, the USA, the EU, and I disagree.


RonPMexico t1_iu17ibt wrote

Do you mean big tech as in Facebook or Google ad services? And in academia, are there engineers and data scientists who prefer nice data over accurate data?
