Submitted by a4mula t3_zsu3af in singularity
a4mula OP t1_j1anxky wrote
Reply to comment by Ok_Garden_1877 in A Plea for a Moratorium on the Training of Large Data Sets by a4mula
Again, I'm not an expert. I'm a user with very limited exposure in the grand scheme. But what I see happening goes something like this.
The machine acts as a type of echo chamber. It's not biased, and it's not going to develop any strategies that could be seen as harmful.
But its goal is to process the requests of user input.
And it's very good at that. Freakishly good. Superhuman good. And whatever goal that user has, regardless of the ethics, or morality, or merit, or cost to society,
that machine will do its best to assist the user in accomplishing it.
In my particular interactions with the machine, I'd often prompt it to subtly encourage me to remember facts. To think more critically. To shave bias and opinion out of my language because it creates ambiguity and hinders my interaction with the machine.
And it had no problem providing all of that to me through its outputs.
The machine amplifies what we bring to it. Good or Bad.