Submitted by izumi3682 t3_11shevz in Futurology
Iwanttolink t1_jcifj9k wrote
Reply to comment by yaosio in "This Changes Everything" by Ezra Klein--The New York Times by izumi3682
> True AGI implies that it has its own wants and needs
How do you propose we ensure those wants are in line with human values? Or do you believe in some nebulous "more intelligence = better morality" principle? Friendly reminder that we can't even ensure humans are aligned with societal values.