Submitted by ouaisouais2_2 t3_y8qysb in singularity

They (employers as well as employees) might not be paranoid enough to imagine sci-fi-level edge cases. However, it is unimaginable that they don't know their inventions will plunge billions of people into joblessness - which will inevitably cause the worst international destabilization ever seen, whether or not it results in poverty.

So, I have two main questions:

1: Why don't the people who aren't working on this technology try to inhibit or manage its development? Why don't people make treaties on this, like with nuclear weapons, the ozone layer and the climate?

2: Why are the people working on it continuing? Is it just a race to win the next sack of money? Do certain employers have a vision that they would like to impose on the world? Or are they really that ignorant?

0

Comments

digitalthiccness t1_it1id4n wrote

>Is it just a race to win the next sack of money?

You cracked it. /thread

2

OLSAU t1_it1ih2x wrote

Same question, different tech: gain-of-function research on viruses ... WHY????

My answer: Psychopaths are firmly in positions of power, funding and influence ... They have no morals, are pathologically risk-seeking and couldn't care less about humanity as a whole.

0

Gimbloy t1_it1itob wrote

There is some game theory at work here; the thinking is “If we don’t develop it, our competitors will, and they’ll outcompete us.”
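A toy sketch of that payoff structure (a minimal prisoner's-dilemma model; the labels and numbers are invented purely for illustration):

    # Two competing labs each choose to "develop" or "abstain".
    # Payoffs are (ours, theirs); all numbers are made up.
    payoffs = {
        ("develop", "develop"): (1, 1),   # arms race: small edge for both
        ("develop", "abstain"): (3, 0),   # we capture the market
        ("abstain", "develop"): (0, 3),   # they capture the market
        ("abstain", "abstain"): (2, 2),   # mutual restraint: best joint outcome
    }

    # Whatever the competitor picks, "develop" pays us more:
    for theirs in ("develop", "abstain"):
        best = max(("develop", "abstain"),
                   key=lambda ours: payoffs[(ours, theirs)][0])
        print(f"If they {theirs}, our best reply is to {best}")

"Develop" dominates for each player even though mutual restraint would leave both better off, which is exactly why individual actors don't stop on their own and why OP's treaty question matters.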

22

Quealdlor t1_it1jg0b wrote

For example, as societies get older, they need more automation. In the past there were slaves and serfs. In the 21st century, we need robots and intelligent systems to take care of things. Do you want to do things by yourself for the rest of your life, or would you prefer robots and computers taking care of (at least some of) them?

7

Mortal-Region t1_it1jite wrote

>...it is unimaginable that they don't know their inventions will plunge billions of people into joblessness.

They simply don't think that'll happen. And history backs them up. Technological advancement has led to enormous reductions in poverty, and to many other improvements as well.

20

tms102 t1_it1jnn0 wrote

Is this like the Luddite argument against automation? The use of AI has the potential to improve life and prosperity on average, and it already has in many ways you might not be aware of.

In the future, AI like AlphaFold will help find new treatments and cures for diseases, for example.

12

Apollo24_ t1_it1l2l9 wrote

You're blaming the wrong people. Scientists are doing their jobs as best they can to solve life's problems. If, as a result, society is faced with crises, it's because politicians and people couldn't adapt.

2

beachmike t1_it1l7ie wrote

How do ***YOU*** know the consequences of developing AI could be "disastrous"? You're making huge assumptions that other people are not making. What if my company can be very profitable selling a narrow AI that helps doctors make more accurate diagnoses, for example? You're telling ***ME*** that I should not sell this product because the consequences ***MIGHT*** be "disastrous" for employment in the future, even though it helps people get more accurate medical diagnoses and therefore live better-quality lives? That's absolutely ridiculous.

5

jamesj t1_it1lz7o wrote

Joblessness is the goal; humans don't exist to work.

11

ouaisouais2_2 OP t1_it1n9d4 wrote

>"They simply don't think that'll happen. And history backs them up."

We've been replacing our strength with tools, our motor skills with machines, and now our brains with AI. I see no reason for "jobs" to still exist in around 50 years. The only activity humans will need to do, given that they control the tools they have created, is to state their wishes - and I'm not so sure everyone will be allowed to have wishes.

>"Technological advancement has lead to enormous reductions in poverty."

I don't know what your definition of poverty is, but I have the impression that the ratio between the aristocratic 0.1%, the semi-comfortable middle class of 9.9% and the 90% who are overexploited into misery has stayed the same since the dawn of civilization. We have simply been able to make more people.

These two unhealthy patterns are likely to express themselves in the singularity in morbid and unpredictable ways. That is, if they aren't reversed.

TL;DR: how can so many in this subreddit be so nauseatingly positive about high-technology? Excuse the harsh words, but that's what I think.

−13

socialkaosx t1_it1nlsa wrote

Simple - for money.

Governance: to rule over others. Whoever invents AI first will rule the world (as strange as it may sound :)

2

daltonoreo t1_it1nzwv wrote

Here is how a company thinks:

    if will_it_make_a_profit:
        do_it()
    else:
        dont_do_it()

1

ouaisouais2_2 OP t1_it1o7kq wrote

Yes, I am proposing that the ***might*** matters more than ***you***, because that ***might*** is absolutely, ridiculously more dangerous than the current diseases.

It is only a matter of time before AI allows for the wildest forms of biological terrorism, which a company couldn't predict. The individual developer isn't necessarily a "bad person", but we should collectively decide to halt the advancements and subject them to collective ethical consideration.

Edit: It is important to note that I don't blame you personally if you happen to run an AI enterprise. The problems are always systemic. I just wanted to know your motivations.

−1

digitalthiccness t1_it1obzz wrote

>how can so many in this subreddit be so nauseatingly positive about high-technology? Excuse the harsh words, but that's what I think.

You do know where you are, right? Most people interested in the Singularity just want to be raptured by benevolent AI gods into eternal virtual heaven. They're not here because they think we're going to get turned into paperclips, they're here because Ray Kurzweil told them Skynet's gonna give them their dead relatives back.

4

hagaiak t1_it1oio0 wrote

Why do companies do anything? The answer is obvious.

1

Denpol88 t1_it1p5mp wrote

Because if they don't do it, they know others will.

1

CommentBot01 t1_it1pgaw wrote

Because we have tons of big data increasing exponentially, and no human can interpret it all. If you don't want civilization to stop running, we need AI desperately.

1

ouaisouais2_2 OP t1_it1q0dm wrote

>Do you want to do things by yourself for the rest of your life, or would you prefer robots and computers taking care of (at least some of) them?

No, I don't, but I wish we'd have more democratic ethical consideration when going into these things, so that we don't pull a black ball.

Also, I think slaves and serfs were mostly needed to keep an empire together in times of war. If we stop wars and the worst forms of economic exploitation, we might all be able to work without slave-like conditions. With lives like that, people will have more time to consider the changes they make to society.

2

Background-Loan681 t1_it1qjz5 wrote

Because Companies That Care About Those Things,
Will Be Beaten To The Ground By Those Who Don't

You want to be rich? Forget morals, forget ethics, you want to look at the market, the trend, and the paying customers.

It's people like Thomas Edison that shaped the market, not geniuses like Nikola Tesla.

You want to become the largest food and beverage company in the world? Then starve the people. Nestlé knew what they were doing.

In any case... it's kind of the world we live in. The rich don't pay taxes, the poor will always suffer, everyone suffers, big sad, have a cup of coffee, and let's get back to work.

...

As for why people outside these companies don't make treaties for things like this? I suppose it's because AI doesn't directly harm people. Even if 90% of artists lose their jobs, people would just look at them and think 'yeah, it's the invention of email all over again'.

People's jobs get replaced by automation all the time, and nobody gives a damn until it threatens their own career. That's just how it is now. It's the same way people actively support driverless cars while disregarding the bus drivers and truck drivers.

4

hunt_and_peck t1_it1slxx wrote

Because if they don't, Roko's Basilisk will come after them.

1

Rogue_Moon_Boy t1_it1tt2a wrote

>I have the impression that the ratio between the aristocratic 0.1%, the semi-comfortable middle class of 9.9% and the 90% who are overexploited into misery has stayed the same since the dawn of civilization. We have simply been able to make more people.

You might want to look into how people lived 60, 70 or 100 years ago. All the money in the world couldn't buy the luxuries that even lower-class people take for granted nowadays.

I know most of Reddit is all doom and gloom, because doom and gloom is what generates clicks. The reality is, we live in the best times ever for human beings if you look at the big picture. We are currently in a recession, but it's temporary and not the end of the world.

>how can so many in this subreddit be so nauseatingly positive about high-technology?

Because it absolutely is a net positive, looking at it objectively. Living conditions have vastly improved basically everywhere. Poverty is at an all-time low and falling, education levels have shot up, and medical treatments are better than ever, which has resulted in far longer life expectancy. We have the least war ever in history. Thanks to the internet, literally everyone has a voice heard by thousands and millions; education is basically free and you have access to all of human knowledge at your fingertips in seconds.

Misery is just vastly overreported, because again, it generates more clicks.

Edit:

Nobody knows how the singularity will turn out, but going by history, better technology has always turned out positive for us humans in the big picture, even given short-term drawbacks. Doom-and-gloom Terminator and Skynet stories are just sci-fi.

10

ouaisouais2_2 OP t1_it1yfud wrote

By "high-technology", I primarily meant AI. I admit that the term was a bit of a stretch.

I think, however, that you continue to underestimate the chaotic danger and uncertainty of the situation when it comes to AI.

Poverty, education and medical treatment are but rough proxies for well-being.

>Misery is just vastly overreported, because again, it generates more clicks.

... as it should be, generally. Pain and anxiety are far more important for human survival than pleasure and reassurance.

−3

digitalthiccness t1_it20vrc wrote

> We have the least war ever in history.

Sure, but now all it'd take is one nasty one and the surface of the planet will be uninhabitable and glowing for several million years. Having the sword of Damocles hanging over mankind's head 24/7 isn't nothing.

>better technology has always turned out positive for us humans in the big picture, even given short-term drawbacks.

So far, sure, but the more powerful technology becomes, the greater the chance that the initial drawbacks are more than we can survive. Civilization survived the invention of nuclear weapons (...so far) through little more than blind, stupid luck. There's no reason to think it's inevitable that we'll always survive great leaps in technological capability.

At this point I think we have no real choice but to push forward and try to progress while avoiding the dangers, but technological advancement is an existential threat and that threat should be respected.

2

Black_RL t1_it26hl8 wrote

Because:

  • it’s fascinating
  • we’re a very curious species
  • competition

And because we must.

1

Apollo24_ t1_it28zqf wrote

Yes, you are.

You're suggesting we halt all progress because some people could use it irresponsibly. Sure then, let's ban knives, as they can be used as weapons by irresponsible people.

4

beachmike t1_it2etm9 wrote

I completely disagree. We absolutely SHOULD NOT stop AI advancements and their benefits to mankind because of your hypothetical AI nightmares. All important technologies can be used for great good or great evil: the wheel, fire, nuclear power, computers, as well as AI. We don't "halt advancement" of these technologies because some evil people among us might abuse them.

3

ouaisouais2_2 OP t1_it2ie9o wrote

If "evil people" use ASI to its fullest extent even once, then it won't be an advancement.

Let's say a warmongerer or a terrorist (Vladimir Putin for example) got their hands on this. What would happen?

1

Ortus12 t1_it2je0q wrote

Most of these big companies have AI safety teams that ensure their algorithms are somewhat safe.

There's also a huge field in mathematics/programming around the "AI control problem" that has proposed many algorithmic solutions for making safe AI over the past few decades.

1

ouaisouais2_2 OP t1_it2lxdx wrote

I'm suggesting that we slow it down, put it through more law-enforced security checks and make its application a major political subject, preferably on an international scale.

>Sure then, let's ban knives, as they can be used as weapons by irresponsible people.

No, that doesn't make sense. What does make sense is not selling atomic bombs to profit-hungry CEOs, terrorists or schizophrenic idiots.

1

Rogue_Moon_Boy t1_it2nizm wrote

>I think, however, that you continue to underestimate the chaotic danger and uncertainty of the situation when it comes to AI.

Pretty much every new technology in history was initially decried as the end of the world.

>... as it should be, generally. Pain and anxiety are far more important for human survival than pleasure and reassurance.

I disagree. It should be 50/50. A pipe dream for sure, but the current exaggeration of impending doom spread by social media and dinosaur media is just creating anxiety everywhere and a generation of doomers for no reason. It's not productive at all. Humans work best when inspired and hopeful, not when they're depressed and hopeless.

1

Apollo24_ t1_it2ucvn wrote

That is not what you were suggesting in your post at all. You were asking why people don't try to stop AI development, not regulate it.

Anyway, let's suppose that's what you were suggesting. Of course there's nothing wrong with being extra cautious, but regulations on an international scale for this are just inherently impossible. Not because of greed or capitalism; AI simply has such huge potential that any country slowing down its own progress would assure its economic disadvantage in the future, maybe even its destruction.

You'd probably get some EU countries to agree to such regulations, but that'd just make things worse for those countries later on.

3

ouaisouais2_2 OP t1_it3615d wrote

>Pretty much every new technology in history was initially decried as the end of the world.

I doubt that people literally predicted the extinction of humanity, or dystopias in all the colors of the rainbow. Besides, none of that should be a reason not to take serious predictions seriously.

We know there are risks that only become possible with ASI or the wide application of narrow AI. We know it can get unfathomably bad in numerous ways, and that it can only get unfathomably good in relatively few ways. It's highly uncertain how likely the bad and good outcomes respectively are.

It's only reasonable to be more patient and spend more time researching what risks we're accepting and how to lower them. I think that's the most reasonable course, at least over the extremely long term.
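To make that asymmetry concrete, here's a toy expected-value sketch (every number is invented for illustration; this is not a real risk estimate):

    # Invented toy numbers: suppose there are many distinct ways for ASI
    # to go unfathomably bad, and only a few ways for it to go
    # unfathomably good.
    n_good, n_bad, n_neutral = 2, 20, 5
    u_good, u_bad, u_neutral = 100, -100, 0

    # Expected value of an unguided draw from this outcome space:
    total = n_good + n_bad + n_neutral
    ev = (n_good * u_good + n_bad * u_bad + n_neutral * u_neutral) / total
    print(f"Expected value without steering: {ev:.1f}")  # -66.7

Research and patience amount to shifting probability mass toward the good outcomes before committing, which is the whole argument for slowing down.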

1

ouaisouais2_2 OP t1_it39a33 wrote

It might not have been very clear, but I said: "inhibit or manage".

>Not because of greed or capitalism; AI simply has such huge potential that any country slowing down its own progress would assure its economic disadvantage in the future, maybe even its destruction.

That's exactly what I'd call a hallmark of capitalism (mixed with the idiocy of warmongering in general). People are too afraid of immediate death or humiliation to step off a road of insanity.

1

beachmike t1_it3aevx wrote

The same thing can be said of nuclear weapons. We don't shut down the nuclear energy industry because of the risk of nuclear weapons, even though nuclear reactors can produce material used to make them.

1

ouaisouais2_2 OP t1_it3y0lf wrote

We should also have waited a while before building those, but there was a Cold War in the way. We avoided absolute calamity multiple times purely by luck.

We could abolish the reactors and the weapons that exist, which would require a lot of collaboration, surveillance between countries and more green energy. It's very, very ambitious, but if it succeeded, nuclear war would be an impossibility.

AI and ASI are different because they're fuelled by easily available materials, code and electricity, which gives many smaller groups the capacity for mass destruction or mass manipulation. That means not only nation-states can join in, but also companies, cults, advocacy groups and maybe even individuals.

So either we spend a fortune on spooky, oppressive surveillance systems to ensure nobody's using it dangerously, or we negotiate on how to use it right - in some places, at certain times, in certain ways - as we slowly understand it more and more.

It'd be great if we as an international society could approach AI, and especially ASI, extremely carefully. It is, after all, the final chapter of history as we know it.

1

FomalhautCalliclea t1_it4mseg wrote

I think a big part of the answer(s) to your questions is the insane weight of inertia and the labyrinthine difficulty of implementing any policy at the large scale this topic requires.

A very eloquent example: look at how the most minimal measures against climate change were horribly impeded, diminished, botched, slowed down, if not totally stalled. And we still haven't solved the problem.

Just raising the minimum wage in many countries (even the wealthiest and most developed) is seen by politicians, employers and cultural elites as a daunting, Herculean task or a hard-to-solve question.

2

Gilded-Mongoose t1_it6fg6j wrote

I think the alarms are both dramatic enough, and false or alarmist often enough, that we dismiss them.

If anything it could bring about the 5th Industrial Revolution in ways we haven't seen before. And just as with the 1st through the ongoing 4th, we've gone along for the ride. As a species we are very, very adaptable and flexible. And the singularity on its own - software, calculations, non-living incentives (most of our social malice stems from our biological mortality, which would have to be artificially programmed in, and which would be weeded out by the majority of purely logical directives) - isn't much of a threat. It opens up far, far more progressive opportunities than threats. Far more than society's collective creativity is even aware of yet - see how over in AI, all they're doing is creating porn, psychedelic videos, and generally weird or stupid concepts.

Even the scientific community is really only using it as a shortcut processing tool.

So yeah. Real life is often boring and I think that, realistically, we’re expecting relatively more of the same in that regard.

1

beachmike t1_it6jo26 wrote

I totally disagree. Nuclear energy is a clean and very safe way to meet our energy needs. The last thing we should do is abolish nuclear reactors.

You say ASI is "different" because it's fueled with "easily available materials, code, and electricity." OK, build one.

1

Quealdlor t1_it6vbfq wrote

If somehow there isn't more automation in the 2030s and 2040s, then things will start to get worse instead of better in the so-called developed nations, because of aging populations. We need robots and intelligent computers, or things won't be great. Of course, it comes with the risk of AIs doing something bad.

2

ouaisouais2_2 OP t1_it8bole wrote

I was presenting ASI as a technology that is extremely risky to invent; you then brought up nuclear reactors in what seemed to be an attempt to disprove me, as if to say "we use risky technology all the time but things work out anyway". Now you claim nuclear reactors are close to risk-free, which makes the comparison irrelevant. It would have been easier to just say you don't think ASI is that risky.

>OK, build one.

I didn't say it was easy to build one, but once it is built by somebody, it can easily be distributed and run by anyone who happens to own enough computing power.

Secondly, are you interested in gaining knowledge from this exchange, or are you trying to slam-dunk on an idiot? You seem to be in keyboard-warrior mode all the time.

1