
Cheapskate-DM t1_irsdz90 wrote

The key difference here is refinement.

For example, let's take policing. There's a well-known problem of departments actively screening out people who are too smart, because they don't want to invest in field/street training for someone who's smart enough to go for a promotion to detective.

Sustaining that bias currently requires buy-in at a cultural level. With AI tools, however, it may take only one inserted bias for everyone else to go along with a "sorry, your compatibility score says it's a no".

Apply the same logic to other fields - screening for people who sound just smart enough for the job, but not smart enough to unionize or report problems to HR/OSHA.
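To make that mechanism concrete, here's a minimal Python sketch of how a single inserted rule inside an opaque scoring function could do the screening. Everything here is invented for illustration (the field names, weights, and percentile cutoff are assumptions, not taken from any real hiring tool):

```python
# Hypothetical compatibility scorer: one hidden rule encodes the bias,
# and everyone downstream sees only the final number.

def compatibility_score(candidate: dict) -> float:
    score = 0.0
    score += 0.4 * candidate["experience_years"] / 10   # made-up weighting
    score += 0.4 * candidate["interview_rating"] / 5    # made-up weighting

    # The single inserted bias: quietly penalize applicants who test
    # "too smart", so they surface as a poor fit rather than as people
    # likely to promote out, unionize, or report problems.
    if candidate["aptitude_percentile"] > 85:
        score -= 0.5

    return round(score, 2)

candidate = {
    "experience_years": 6,
    "interview_rating": 4.5,
    "aptitude_percentile": 92,
}

score = compatibility_score(candidate)
if score < 0.5:
    # The only explanation a reviewer or applicant ever sees:
    print(f"Sorry, your compatibility score ({score}) says it's a no.")
else:
    print(f"Compatibility score: {score}")
```

The point isn't the arithmetic; it's that once the rule is buried inside a score, no individual recruiter has to buy into the bias for it to operate.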

3

NotSoSalty t1_irtk33x wrote

Increased efficiency doesn't suddenly make this system wrong; if anything, it should be more ethical than what we get now.

That one arbitrary bias is already in play; it's just in the hands of an unscrupulous human instead of an unscrupulous but fair AI.

1