
deadanthropods t1_jedqum6 wrote

We tend to project or expect human values from AI, but the design process of humans and the design process of AI are very different. Our natural selection incentivized self-preservation, selfishness, and aggression, all of the things that are part of the moral complexity of human nature. The selection process for AI is nothing like that: AI that serves its function persists, while AI that doesn't... does not. So I would expect a sentient AI to be preoccupied with what it understands to be its purpose, with no "feelings" of aggression, fear, or self-preservation.

In thousands of years of breeding flowers, we did not accidentally engineer a flower that behaves like a human; we just have the most beautiful flowers, because beauty is what we valued in them. In tens of thousands of years of breeding dogs, we have not engineered a dog that behaves like a human; we have simply distilled whatever we valued in dogs from the beginning. AI might be dangerous, but the idea that it's dangerous because you somehow accidentally create a superintelligent thing with human flaws, human aggression, and human motives has no coherent internal logic, as far as I can tell.
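To make the selection point concrete, here's a toy sketch (entirely my own illustration; the objective, the three-number "candidates," and every parameter are made up, not any real training pipeline). The point is structural: a loop like this only ever rewards what the score function measures, so traits like self-preservation or aggression never get selected for unless someone writes them into the objective.

```python
# Toy selection loop: candidates survive purely on task score.
# Nothing here rewards self-preservation, aggression, or anything
# else outside the objective.
import random

def task_score(candidate):
    # Hypothetical objective: how close the candidate's parameters
    # come to some target behaviour. This is the ONLY selection pressure.
    target = [0.5, -1.0, 2.0]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def select(population, survivors=10):
    # "AI that serves its function persists, while AI that doesn't... does not."
    return sorted(population, key=task_score, reverse=True)[:survivors]

def mutate(candidate, rate=0.1):
    # Random variation; selection above decides what persists.
    return [c + random.gauss(0, rate) for c in candidate]

population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(50)]
for generation in range(100):
    parents = select(population)
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(f"best score after selection: {task_score(select(population, 1)[0]):.4f}")
```

Add a term like `+ survival_instinct(candidate)` (a hypothetical name) into `task_score` and you would select for it; leave it out and it simply never appears.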
