
Yesyesnaaooo t1_jdgt47b wrote

Do you think the alignment problem could be solved by having the AI train first on books like 'The Culture' and other works of fiction?

So the AI builds its moral base off stories of AI in the world - like humans built their moral base off religion?

Just spitballing, like.
