Submitted by izumi3682 t3_xxelcu in Futurology
dmun t1_irbwrmm wrote
bk15dcx t1_irby44s wrote
These examples draw their current bias from human bias.
A future AI should draw its conclusions from its own self-introspection rather than from a conglomerate of human bias frequency.
dmun t1_irbyajx wrote
> These examples draw their current bias from human bias.
Yes. That's the point. A.I. are programmed by humans. A.I. are just hyped up decision making algorithms. You seem to be mistaking them for magic.
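To make the point concrete, here's a minimal sketch (purely illustrative; the data and the "model" are made up) of a decision-making algorithm that learns nothing but the frequencies in human-labeled data, and therefore reproduces whatever bias those labelers had:

```python
from collections import defaultdict

# Hypothetical training set of (group, human_decision) pairs. The humans
# who produced these labels approved group "a" far more often than "b".
training_data = [
    ("a", "approve"), ("a", "approve"), ("a", "approve"), ("a", "deny"),
    ("b", "deny"), ("b", "deny"), ("b", "deny"), ("b", "approve"),
]

def fit(data):
    """'Learn' the majority human decision for each group."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, decision in data:
        counts[group][decision] += 1
    return {g: max(d, key=d.get) for g, d in counts.items()}

model = fit(training_data)
print(model["a"])  # approve -- the model simply echoes the labelers' bias
print(model["b"])  # deny
```

Real systems are vastly more complex, but the principle is the same: the algorithm has no independent notion of fairness, only the statistics of the human decisions it was trained on.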
bk15dcx t1_irbys0r wrote
Not at all... But given the charts in this book I have by Ray Kurzweil, AI will surpass human intelligence, and future algorithms will not be based on human decision making, but purely in the AI.
dmun t1_irbzgwh wrote
> but purely in the AI.
Which is actually worse and, indeed, makes the argument that we definitely need an A.I. Bill of Rights to protect humans.
The base assumption here, that I'm reading from you, is that morality and intelligence go hand in hand.
Human morality (the "evils" you refer to) is based on human empathy, humans' philosophically "inherent value," and the human experience.
An intelligence without any of those, or even the basic nerve inputs of the physical reality of inhabiting a body, is Blue and Orange Morality at best and complete, perhaps nihilistic, metaphysical solipsism at worst.
Both are a horror.