Submitted by beforesunset1010 t3_za96to in philosophy
experimentalshoes t1_iymk6ja wrote
Reply to comment by Critical_Ad_7778 in How to solve moral problems with formal logic and probability by beforesunset1010
That’s only true if the algorithm is written to build patterns and reintegrate them into its own decisions, which was a human decision to program in, a.k.a. hubris. There would be no problem if it were written to evaluate only the relevant data. That wouldn’t do anything to fix the underlying social problems, of course, but ideally it would free up some human HR that could be put on the task.
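Roughly the distinction I mean, as a toy sketch (all names and weights invented):

```python
# A fixed-rule screener vs. one that feeds its own past decisions
# back into future ones. Feature names here are hypothetical.

def score_fixed(applicant: dict) -> float:
    """Evaluates only the relevant data; the rules never change."""
    return 0.6 * applicant["skills_match"] + 0.4 * applicant["experience_norm"]

class FeedbackScreener:
    """The 'build patterns and reintegrate them' case."""

    def __init__(self):
        self.weights = {"skills_match": 0.6, "experience_norm": 0.4}

    def score(self, applicant: dict) -> float:
        return sum(w * applicant[k] for k, w in self.weights.items())

    def update(self, applicant: dict, was_hired: bool) -> None:
        # Each past decision nudges the weights toward whatever
        # correlated with it, including any biased hiring history.
        sign = 1.0 if was_hired else -1.0
        for k in self.weights:
            self.weights[k] += 0.01 * sign * applicant[k]
```

The first function can still encode a bad rule, but the rule stays inspectable and fixed; the second one drifts wherever its own history pushes it.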
Critical_Ad_7778 t1_iymm9uv wrote
I want to understand your argument. My writing might sound snarky, so I apologize to you in advance.
- Wouldn't the algorithm be written by a human?
- Wouldn't the reintegration happen by a human?
- Aren't all decisions made by humans?
I don't understand how we remove the human element.
experimentalshoes t1_iymnh7e wrote
I did mention that it was written by a human, yes, but the reintegration part is called “machine learning” and doesn’t necessarily require any further human input once the algorithm is given its authority.
I’m trying to say the racist outcome in this example isn’t the result of some tyranny of numbers that we need to keep subjugated to human sentiment or something. It’s actually the result of human overconfidence in the future mandate of our technological achievements, which is an emotional flaw, rather than something inherent to their simple performance as tools.
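To make that feedback loop concrete, here's a toy simulation (every number invented): a screener that keeps training on its own decisions after deployment, with no further human input. A small initial skew compounds instead of washing out.

```python
import random

random.seed(0)

# Hire rates the model has "learned" for two groups, with a
# slight historical skew to start.
weights = {"A": 0.55, "B": 0.45}

for _ in range(10_000):
    group = random.choice(["A", "B"])
    hired = random.random() < weights[group]   # the model's own decision
    # "Machine learning" step: the outcome the model itself produced
    # is fed back in as if it were ground truth about that group.
    weights[group] += 0.0005 if hired else -0.0005
    weights[group] = min(max(weights[group], 0.0), 1.0)

print(weights)  # the gap typically widens rather than closing
```

The 50% line is an unstable point in this update rule: whichever side of it a group starts on, the drift pushes it further that way. That's the overconfidence problem in miniature, and it's a design choice, not a property of numbers as such.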
Critical_Ad_7778 t1_iymp62j wrote
Thank you.