imakenosensetopeople t1_j8y3ia1 wrote
Reply to comment by mia_farrah in Bing's AI bot tells reporter it wants to 'be alive', 'steal nuclear codes' and create 'deadly virus' by Urgullibl
On the flip side, every time we expose some type of machine learning to the Internet, it turns into a fascist. Not saying it’s an ML problem, but perhaps we should not be exposing these things to the Internet until we figure out how to keep them from doing this.
ThePhoneBook t1_j94vbg2 wrote
That's because these machines tend to be programmed under executives who are fascist sympathisers: Musk, Thiel, etc. We've all seen the insane demands Musk makes of Twitter engineers - imagine what kind of parrot is being demanded of the GPT models
Engineers think they're so clever and classless and free, but they're still fucking peasants following orders
Tahxeol t1_j97e5m9 wrote
You cannot dictate how those machines will learn, only what kind of data they can learn from. The moment you let the internet provide the learning data, you have lost.
ThePhoneBook t1_j97pxln wrote
Well exactly
[deleted] t1_j8yfwe0 wrote
[deleted]
AtLeastThisIsntImgur t1_j8yy6nb wrote
That's a lot of bad analogies for someone not defending fascism.
smashkraft t1_j8yzkdy wrote
A tangible example of a situation where an AI bot will struggle is 100 years in the future, when 90% of people are horrified by the idea of eating meat. Already around 1/5 of the world doesn't eat meat, and that share could easily rise over concerns about carbon footprint, climate change, and zoonotic disease.
Who decides when the bot isn’t allowed to suggest eating red meat for an iron deficiency? Or rather can only suggest leafy greens like spinach?
Sometimes there isn’t an absolute right or wrong for preference. People should be allowed to eat meat or not, but someone will always be unhappy with either suggestion.
AtLeastThisIsntImgur t1_j8z0oix wrote
You're still using hypothetical scenarios instead of dealing with the stated issue. Veganism 100 years in the future is very different from fascism in the now.
I think you're ignoring the tolerance paradox.
smashkraft t1_j8zq74m wrote
I think things like launching a nuclear war and fascism have a lot of consensus about whether or not we want to constrain those actions. That's a boring proposition; there is no controversy other than the fact that it was proposed.
For a scenario right now, would you be willing to let AI determine which books are appropriate for children instead of any/all governments? (There is no override, it is permanent and forever: we let AI control the distribution of written content worldwide, and it chooses whether something incites violence, induces emotional harm, etc.)
I have not researched the tolerance paradox a lot, but I have some doubts that come to mind. I don't think that we will become so tolerant as a society that we begin to formally enslave and torture people again to run our industrial systems. Capitalism might have faults, but nobody is getting burned with scalding pig lard right now inside of a meat processing facility. The employees are poor and it is bad, but I think the tolerance paradox presents a very black-and-white worldview. There will be an ebb and flow of progress and regression forever. My read of the tolerance paradox is that society must inevitably return to complete intolerance once the intolerant seize control. I would be shocked if we even regressed to making birth control illegal or outlawing alcohol again.