Submitted by jormungandrsjig t3_zxxmq5 in Futurology
Comments
syl3n t1_j25b215 wrote
Right now, you are getting the neutered version of life anyway. Believe me, nothing will change in that aspect.
blacklite911 t1_j26rsu6 wrote
How to upgrade to the full version and is it available on the Pirate Bay?
CaptStrangeling t1_j27dppv wrote
I’d say it depends on the bays you’re willing to sail into, sometimes beggars can’t be choosers; any port in the storm, I ain’t judging.
Rofel_Wodring t1_j258fqn wrote
Because questioning the economic mode which gives these corporations power over our lives is way more dangerous and blasphemous than just serenely accepting the very likely possibility of the premature extinction of humanity. Doing otherwise would be SOCIALISM.
BassoeG t1_j2am6v8 wrote
It's not. It's propaganda for keeping the definition of Truth™ out of your hands and under the control of the wealthy. So their video of Saddam Hussein gloating over having done 9/11 (no Saudis involved, no sirree, they're American allies) and his plan to acquire WMDs and use them in another attack on America would be 'real', and the Jeffrey Epstein blackmail tapes would be 'deepfakes'.
UniversalMomentum t1_j23zblv wrote
We aren't going to get AI, and there probably won't be many AIs. You're imagining everyday robotics with AI, but they will only have machine learning, not sentience.
We aren't going to put living programs in our TVs and vacuum cleaners, and we don't want to enslave AI for simple labor jobs either; that's all just good programming and machine learning.
MOST of what you imagine AI doing will just be done by machine learning that has no chance of developing sentience.
YouDontKnowMyLlFE t1_j2492uy wrote
How is any of what you said a meaningful response? ML, language processing model, AI, same difference.
The scale of resources necessary to create the most powerful of these tools is only achievable by nations and their largest corporations.
Those with access to these tools will be capable of performing many feats faster and/or better than those without.
The businesses of ads, software development, medical care, insurance, risk analysis, behavioral modification, logistics, investment firms, etc. are all going to go to whoever has the best tools for handfuls of sapiens to apply.
You think the illusion of choice is bad now?
You think the distribution of wealth is bad now?
I believe it’s going to get a whole lot worse if these tools exist in a watered down state for the public and a socioeconomic weapon of mass destruction for the few.
Willbilly1221 t1_j24ig9w wrote
I agree. It's not the harm from AI itself directly toward humans, it's the harm caused by humans who have access to AI as a tool and wield it to influence society toward their worldview. We are already in a position where a select few decide the rules of the game we call society, and the select few usually win at this game. Handing a game genie full of cheat codes to the select few who generally always win further solidifies their position of power to stamp out any unwanted change, regardless of the impact it has on the rest of the world, so long as the select few are nice and comfy in their ivory towers. If you believe for a second that those with the ability to harness powerful AI for their own personal benefit would be benevolent enough to share said tool with the rest of us, I have beachfront property in a landlocked region to sell you for a very affordable price.
thruster_fuel69 t1_j24kt50 wrote
A select few don't decide the rules. A select few are raised above the rules for some period of time, but the rules are made by us. It's sad that we're collectively stupid enough to vote against our own interests, but that's humanity! Billions of stupid violent monkeys. Now with machine learning tools to make weapons with!
VisforVenom t1_j25fti1 wrote
The general population's concept of AI is also very sci-fi. I work at a bleeding edge AI company (not in the part of the company that works on the algorithms, but I do work with the actual technology relatively frequently.)
Our "robots", excluding the very basic actual physical robots, are honestly not all that intelligent. After 6 years and 3 rounds of series funding with tons of additional investment and scaling, we still rely on an Indian click farm to try to train the programs to understand very basic concepts that a child could comprehend without instruction.
r4m t1_j24xro7 wrote
It's more that a sentient AI will control all of them the way we control our bodies. Some will run as autonomous routines, also like our bodies, and some will be under direct control.
NotAnAnticline t1_j27o0f5 wrote
Bro we can play Doom on our dishwashers. Of fucking course we're going to put AI in literally everything we can.
usererror99 t1_j245trx wrote
Sentience is the ability to feel, and they are currently making lab-grown meat... I think people confuse sentience and consciousness.
glichez t1_j25smnt wrote
the only thing we need to fear is that people's fear of AI will end up fine-tuning its training as an adversary.
Hoff2001 t1_j26dwrx wrote
believing that redundancy is more than just redundancy
The_Observer_Effects t1_j26jcif wrote
I think I've heard that more than once . . .
unmellowfellow t1_j27arjs wrote
Honestly, AI professionals are the last people I want to hear from unless they're talking about ways to limit AI from replacing human workers. All this stands to do is hurt the poor while benefiting the rich. I have no respect for corporate sycophants who would gladly sell their fellow workers for an extra crust of bread. The article itself, as an interview, doesn't address the real societal impact of AI and automation, and the fact that this is deliberately ignored is pure corporate control in action. The need to limit the development and implementation of AI and automation as they affect employment and labor security is one of existential significance for anyone who must labor to provide for themselves.
12kdaysinthefire t1_j27jo2u wrote
This. I’m less concerned with AI chatbots being some kind of offensive echo chamber and more concerned with the possible lack of jobs available in the coming decades thanks to automation and corporate greed.
Sustain-ability t1_j294fp7 wrote
Give those AI chatbots a few years of training and they'll undermine critical thinking, academia, education, and every job where someone has to write something. No wonder people like Musk are funding this. We'll be encouraged to outsource human creativity to a one-answer device.
reconditedreams t1_j27rrdz wrote
You're looking at it the wrong way. AI will ultimately lead to the downfall of capitalism. The best thing any anti-capitalist can do to speed up the demise of capitalism and the emergence of a new system is to encourage automation and AI development.
jormungandrsjig OP t1_j22tk5e wrote
There's a lot of work to be done, and if we can somehow solve value pluralism for A.I., that would be exciting. We could think of it this way: A.I. shouldn't suggest humans do dangerous things, A.I. shouldn't generate statements that are potentially racist and sexist, and when somebody says the Holocaust never existed, A.I. shouldn't agree. And yet there were instances such as the Tay bot. So I think we have a long way to go.
treditor13 t1_j25gxnj wrote
"I need you to open the pod bay doors, HAL."
•
"......I'm afraid I can't do that, Dave."
malfarcar t1_j26xnjb wrote
I don’t have time to read this article so I will just keep a constant fear of everything just to be safe
AndromedaAnimated t1_j27qpem wrote
The article addresses a lot of topics on a very superficial level.
The one thing that did strike me as interesting was the idea that the advent of the deadly paperclip maker could be prevented by implementing "common sense" in AI.
My opinion is that this would already be possible with an LLM, as long as further scaling of processing power can be economically justified. Common sense depends on semantics and is tied to language and verbal reasoning. AGI would not be necessary for that.
The big problems I see here are the "mesa optimizer" and its hidden goals, as well as reward hacking (my pet peeve...).
Why?
Because common sense is overridden in humans by the pursuit of reward. Humans "wirehead" and cheat all the time, and about 99% of the population is only partly able to apply common sense in ambivalent/ambiguous situations.
LeeWizcraft t1_j24csw9 wrote
Why link things you can't read without getting through a paywall?? Is this a NY Times ad?
break_continue t1_j24ip8d wrote
Try this
treditor13 t1_j25hl4a wrote
Awesome!!! Thanks!!!
canwereturntothe90s t1_j27fdcc wrote
The only thing we have to fear is the paywall that prevents me from reading this article 💀
Jcolebrand t1_j27h3l6 wrote
Paywalled, so ..
Does the article say "corporations and capitalism" after the headline "what we should really fear"? Because that's the only good answer to that topic.
louisdeer t1_j26pzy9 wrote
Can we stop talking about fear? We only care whether we can and never ask whether we should; we let Congress handle the rest, as always.
YouDontKnowMyLlFE t1_j23qncy wrote
How is regurgitation of nonsensical ideas more dangerous than wealthy corporations and governments having private, unconstrained access to bleeding-edge AI while the rest of us get the neutered version?