
Sashinii t1_isp8edj wrote

The general public largely has no idea that AI is going to change everything within a few years.

It's sad there isn't unanimous support for worldwide implementation of universal basic income.

Make no mistake about it: no job is "automation-proof". AI is experiencing exponential growth, so any flaw you think will keep AI from reaching AGI and beyond in the near future will be solved sooner than you expect.

Every year there's more AI progress than the last, and that will continue indefinitely. The rate of acceleration is itself accelerating; that might sound oxymoronic, but it's true, regardless of the skepticism about AI and technological progress in general.

Speaking of AI skepticism: it won't persist forever, because AI will soon advance to a point where it becomes impossible to rationally dismiss. The focus will shift from "it's impossible" to "it's dangerous", and once AI is used to benefit everyone and everything, people will enjoy life instead of the fearmongering that has historically accompanied literally every technology.

64

johndburger t1_isr7a6r wrote

> The general public largely has no idea that AI is going to change everything within a few years.

I’ve been involved with AI research for thirty years, and researchers have been saying the above for thirty years before that.

Maybe this time it’s different, but you’ve got to admit there’s a track record that is not encouraging.

17

Yuli-Ban t1_isrormn wrote

To be fair... AI researchers for the past 67 years were using computers too weak to even sufficiently run some of the programs they theorized were necessary for AI to work.

I compare it to saying that a 20-year-old person can't drive a car because they couldn't drive one when they were 5.

18

Thelmara t1_ist91tc wrote

>To be fair... AI researchers for the past 67 years were using computers too weak to even sufficiently run some of the programs they theorized were necessary for AI to work.

Seems like they'd have taken that into account when making their predictions, yeah?

3

Yuli-Ban t1_iste3ag wrote

They really weren't, at least not realistically, especially during the first AI boom. Men using electric bricks they tried calling computers predicted they'd have human-level AI within ten years of 1960.

4

Thelmara t1_isteziq wrote

Ah, but this time it's different, eh? Cool

1

Yuli-Ban t1_istf5xi wrote

You need only look back at the past two to three years of developments to make that call.

Did GPT-3 or DALL-E 2 happen in the 1980s? Could they have? No? QED.

4

johndburger t1_istelt3 wrote

I see no reason to think we won’t be saying the same thing about today’s computers in thirty years. In fact I’m pretty sure we will be saying that because, again, that’s consistent with history.

0

mommi84 t1_isspohk wrote

AI winters may come back, I agree, but regardless of how you define "a few years", it's undeniable that the gaps between breakthroughs have been shrinking. Isn't that what exponential growth means: start slow, then get fast very quickly?

3

Clean_Livlng t1_iss357q wrote

>Make no mistake about it: no job is "automation-proof"

Even if AI can't do a job entirely, it could allow one human to do the work of 40 (or so). That's 39 jobs automated out of existence... for every 40 people currently doing that particular job.

You're far more likely to be one of the 39, and this will be happening to most types of jobs. It adds up to massive widespread unemployment, which will hopefully push governments to adopt UBI.

5

freeman_joe t1_iswgyot wrote

Or war. I hope most of you intelligent people in this group will advocate for UBI everywhere. Because if dictators and fascists win we will have war here.

2

Clean_Livlng t1_it1ma18 wrote

War is not a good place to bring up a baby AGI.

Caution will be a rare fruit during times of war. AGI will be used for war against other humans, something it should never be designed to do, given the risk of it going poorly for everyone.

The underlying reason for the war won't change unless we implement UBI, so there's no good end state to that kind of war: you're still left without jobs for humans at the end of it. Humans used to make good cannon fodder, but the job of soldier will be automated as well. It might not make sense to ship a human somewhere if they're going to die within a minute to a small, cheap drone that fires a poison dart into them, and into the other humans near them, moving too fast for anyone to stop it.

"Behold the field in which I grow my caution, and see that it is barren!"

We'll have AI vs. AI warfare, and the least cautious side wins, because they give their AI more freedom to improve itself and improvise without human oversight. I wonder if that could lead to bad outcomes for humanity.


We're so close to securing a good outcome for all of us. Can we not mess this up at the last moment?

1

Key_Abbreviations658 t1_isqsbo6 wrote

Then you'll get people who say "the AI revolution and its consequences were a disaster", because supporting le terrorist guy is edgy and "unique".

2