GenericHam t1_j1rjshh wrote
Reply to comment by JMAN1422 in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715
I build AI models for a living, and it would be very easy to bias the data.

Say, for instance, that I feed "address" to the model as a raw feature. It will almost certainly come out as an important feature, because education and competence are associated with where you live.

However, this correlation is an artifact of other things. The AI cannot tell the difference between correlation and causation. In this example, address correlates with competence but does not cause it, whereas something like the ability to solve a math problem is actual evidence of competence.
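A toy sketch of that effect (entirely hypothetical data and names, just to illustrate the mechanism): a confounder makes skill cluster by neighborhood, so a "model" that scores candidates by their zip code gives two equally skilled people different scores purely based on address.

```python
import random

random.seed(0)

def make_candidate(zip_code):
    # Confounder: candidates from zip "A" tend to have had better
    # schooling, so their average skill is higher -- but individual
    # skill still varies, and skill is what actually causes competence.
    base = 0.7 if zip_code == "A" else 0.4
    skill = min(1.0, max(0.0, base + random.gauss(0, 0.15)))
    return {"zip": zip_code, "skill": skill, "competent": skill > 0.5}

data = [make_candidate(random.choice("AB")) for _ in range(2000)]

# A naive "model" that uses address as a raw feature: it just predicts
# the historical competence rate of each zip code.
rate = {
    z: sum(d["competent"] for d in data if d["zip"] == z)
       / sum(1 for d in data if d["zip"] == z)
    for z in "AB"
}

# Identical candidates now get different scores purely by address.
print(f"score for zip A: {rate['A']:.2f}, score for zip B: {rate['B']:.2f}")
```

The model isn't wrong about the correlation, which is exactly the problem: it has learned where competent people tend to live, not what competence is, so it penalizes a strong candidate from the "wrong" zip code.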