Submitted by Gari_305 t3_y0brkr in Futurology
Cheapskate-DM t1_irsdz90 wrote
Reply to comment by YareSekiro in AI app could diagnose illnesses based on speech : NPR by Gari_305
The key difference here is refinement.
For example, let's take policing. There's a well-known problem of departments actively screening out people who are too smart, because they don't want to invest in field/street training for someone who's smart enough to go for a promotion to detective.
Sustaining that currently requires buy-in at a cultural level. With AI tools, however, you may need only one inserted bias to make everyone else go along with a "sorry, your compatibility score says it's a no".
Apply the same logic to other fields - screening for people who sound just smart enough for the job, but not smart enough to unionize or report problems to HR/OSHA.
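The screening described above amounts to a band-pass filter on an assessed score: accept candidates above a competence floor but below a "too smart" ceiling. A minimal sketch, where the function name, thresholds, and candidate scores are all hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of the "band-pass" screening bias described above.
# Thresholds, names, and scores are invented; nothing here reflects a
# real screening system.

def passes_screen(assessed_score: float,
                  floor: float = 0.6,
                  ceiling: float = 0.8) -> bool:
    """Accept only candidates scoring above `floor` (capable enough for
    the job) but below `ceiling` (not "too smart" by the screener's
    inserted bias). One biased ceiling, baked in once, then gets applied
    uniformly to every applicant."""
    return floor <= assessed_score < ceiling

candidates = {"A": 0.55, "B": 0.70, "C": 0.92}
results = {name: passes_screen(score) for name, score in candidates.items()}
# Only B lands inside the band; A is rejected as under-qualified,
# C as "over-qualified".
```

The point of the sketch is that the bias lives in a single parameter (`ceiling`): no individual screener has to share the prejudice for it to operate at scale.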
NotSoSalty t1_irtk33x wrote
Increased efficiency doesn't suddenly make this system wrong; if anything, it should be more ethical than what we get now.
That one arbitrary bias is already in play; it's just in the hands of an unscrupulous human instead of an unscrupulous but consistent AI.