Submitted by RamaSchneider t3_10u9wyn in Futurology
hesiod2 t1_j7feg3e wrote
Reply to comment by Sirisian in What happens when the AI machine decides what you should know? by RamaSchneider
This can be solved by having a default setting which the user can override. For example, by default Google hides sexual material, but that setting can easily be changed by the user. Adults then make their own decisions about their settings and decide for their children what they want them to see.
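A minimal sketch of that default-plus-override pattern, with parental restrictions layered on top. All names here (DEFAULTS, UserSettings, the setting keys) are invented for illustration, not any real product's API:

    # Hypothetical defaults-with-override scheme, for illustration only.
    from dataclasses import dataclass, field

    # System-wide defaults: the conservative behavior every account starts with.
    DEFAULTS = {"safe_search": True, "show_sexual_content": False}

    @dataclass
    class UserSettings:
        overrides: dict = field(default_factory=dict)  # the user's explicit choices
        is_child_account: bool = False                 # managed by a parent/guardian

        def effective(self) -> dict:
            # Start from the defaults, then apply the user's own overrides.
            settings = {**DEFAULTS, **self.overrides}
            # A child account cannot override the parental restrictions.
            if self.is_child_account:
                settings["show_sexual_content"] = False
                settings["safe_search"] = True
            return settings

    adult = UserSettings(overrides={"show_sexual_content": True})
    child = UserSettings(is_child_account=True)
    print(adult.effective())  # the adult's override takes effect
    print(child.effective())  # defaults stay enforced regardless of overrides

The point of the design: the platform only chooses the starting point; the user, not the platform, makes the final call for their own account.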
According to Sam Altman: “we are working to improve the default settings to be more neutral, and also to empower users to get our systems to behave in accordance with their individual preferences within broad bounds. this is harder than it sounds and will take us some time to get right.”
Source: https://twitter.com/sama/status/1620927984797638656?s=46&t=iyZErcajcVCp5w0iAm_08A
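Altman's phrase "within broad bounds" suggests preferences that are user-tunable but clamped to a system-defined range. A rough sketch of what that might look like; the parameter names and ranges are invented, not anything OpenAI has published:

    # Hypothetical "preferences within broad bounds": the user picks a value,
    # but the system clamps it to the range it permits.
    SYSTEM_BOUNDS = {
        "response_edginess": (0.0, 0.6),   # the system never allows the full 0..1 range
        "political_content": (0.0, 1.0),
    }

    def apply_preference(name: str, requested: float) -> float:
        """Honor the user's preference only inside the system-defined bounds."""
        low, high = SYSTEM_BOUNDS[name]
        return max(low, min(high, requested))

    print(apply_preference("response_edginess", 0.9))  # clamped to 0.6
    print(apply_preference("political_content", 0.3))  # allowed as requested

Note that even in this scheme, someone still has to choose the bounds, which is exactly the objection raised in the reply below.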
orincoro t1_j7fr90k wrote
Those settings are also driven by machine learning. You’re thinking in a linear way, but neural networks don’t work like that.
All of this is nonsensical. Altman has to define what "neutral" is. But neutrality is a subjective value, not an objective characteristic. What's neutral to you isn't neutral to me. The bloody-minded technocracy of these companies is utterly fucking maddening. They'll take human-driven decision making, and the definition of morality and ethics themselves, and place it in the hands of programs. And believe me: the people who will benefit are the people who own and control those programs.