Submitted by mocha_sweetheart t3_zvxem0 in singularity

Barring some contrived scenario where the AI escapes or turns against the rich, that is. Someone mentioned a few days ago that there are efforts to make the singularity awful for everyone, with people lobbying to ban the use of AI for things like art, but let’s be honest: the rich will still be able to use it, just not the general population.

And I’m quite skeptical of Elon Musk being on the board for OpenAI, and other billionaires on other AI projects. I try to be optimistic that maybe collaboration for the benefit of all will win out over competition for the sake of the rich, but it’s just as possible things will go the other way around which is scary.

Thoughts on this please?

48

Comments


TFenrir t1_j1rrbr5 wrote

Two things:

  1. The only reason that the rich would have for keeping it out of the hands of the non rich is that it would somehow negatively impact them if the non rich had this technology.

  2. The nature of much of this AI doesn't lend itself to being restricted very easily. It's too reproducible, given enough time. It would require the synchronized efforts of people and governments across the world to keep it in the hands of a select few.

The reason why the rich have things that we (I'm putting myself in the category of non rich, but compared to some in the world I'm incredibly wealthy) don't have isn't primarily because they don't want us to have it, but because those things cost more money than we can afford, and they have no problems affording it.

Removing the barrier of cost generally makes things accessible to everyone - this is why smartphones are prolific, for example, even in the developing world.

51

g51BGm0G t1_j1rs6c6 wrote

It has already started... they block many queries today.

4

DukkyDrake t1_j1rsnli wrote

It very well could, assuming the corps could keep the secret sauce under wraps. I think that's unlikely. The lowest-paid members of the dev team will bail and form their own startup to get a bigger slice of the pie. Their dev teams will similarly fragment eventually. Government could get involved and step on anyone trying to replicate the work; they can justify almost anything by invoking national security.

There is a risk of the first to cross the finish line shutting everyone else in the world out.

20

No_Ninja3309_NoNoYes t1_j1rt3y8 wrote

Government pressure and public outcry. Some parts of AI technology are open to the public; the rest could be leaked.

1

mocha_sweetheart OP t1_j1rt6qv wrote

Yes, but who’s to say the barrier of cost will be taken down? It’s not as easily reproducible as you say, training it takes tons of data and computing power etc.

For all we know, if things like posthumanism become real, they might well just charge an unreasonable price so that only the select few get it, leading to a Gattaca-like scenario in real life. These sorts of dystopias are a genuine possibility with the advancements we’re seeing.

1

mocha_sweetheart OP t1_j1rtpgk wrote

“Government pressure” doesn’t amount to much when lobbying politicians with wealth beyond what most of us can imagine, bribery, etc. are VERY well-known tactics that billionaires use to further their goals. Pardon my vulgarity, but the government has long been the billionaires’ bitch, at least in the US. For example, you know oil moguls lobby politicians to ban green energy in certain areas so those areas remain reliant on oil? Let alone some newfangled AI technology, which gives them even MORE incentive and ability to keep themselves in power.

Public outcry? Yeah, good luck with that. The rich have killed activists plenty of times in the past, like the Coca-Cola company allegedly paying assassins to do exactly that. I’m sure they’d have no issue doing it again when they can get an AI to do the same level of work or better, or just threaten everyone into falling back in line.

9

jeffkeeg t1_j1ru1ob wrote

Artificial intelligence, at least in its current state, is a lot like nukes.

The only secret was that it's possible. Nothing stops the non-elites from banding together and building it themselves.

One thing AI has over nukes, however, is that computational power is at least somewhat easier to procure than fissile material.

5

4e_65_6f t1_j1rudlt wrote

My (wildly speculative and somewhat pessimistic) thoughts on how this will go:

- Elites will very likely have exclusive access to the best models and smartest AI at first.
- A certain company will achieve a complete monopoly on the labor market by creating some AGI model that can replace any worker.
- A massive push for economic change will start (the sides being UBI vs. an AI ban).
- The company (now holding a complete monopoly on the labor market) will realize there's no profit to be made from a market in which no buyer has a source of income.

After that moment, there's no reason to reserve the benefits of AI for yourself. There's no cost in production and no profit in selling products, and a whole bunch of people are angry at you for taking away their jobs. So what reason would anyone have to deny people access to your automated production in that situation?

10

sumane12 t1_j1rw5sm wrote

There's a lot to digest here so I'll start with the basics and get deeper;

  1. Cost. The lower the price you can sell something at, the more customers you will have, and so the more profit can be made. This is why things tend to come down in price as sellers try to undercut each other. This will happen with AI as well; I'm sure you have heard of Stable Diffusion?

  2. The open source community. GitHub is living testimony to the fact that no matter how divided we become, a community of like-minded individuals is the best at solving problems. There are some genius individuals who put amazing code on GitHub for free. So let's assume someone develops AGI and keeps it for themselves; some genius will hack into the network, reverse engineer it, and release a freeware version.

  3. The main players seem to want to keep us plebs involved. If you look at DeepMind, OpenAI and Stability AI, they all not only want feedback from the general public, but want our opinions on the path forward. They all seem to agree that the reason to develop AI is to solve all of humanity's problems, and that on our path toward post-scarcity some form of UBI will be necessary.

  4. Most people are generally good. Regardless of whether or not they are billionaires, they want humanity to succeed as a whole. Obviously this isn't everyone, but generally they want something that is mutually beneficial. If you think about what we have compared to what billionaires have (forget bank balances), it's not that different, especially when you compare what we have to what the kings and emperors of the past had. The only major difference is waiting for things; billionaires don't have to wait.

  5. What do they gain by denying us access to these technologies? Ultimately, if we are in a post-scarcity environment, they don't require our cheap labour anymore. That was the strongest factor creating inequality in the past. I think most of us would consider uplifting animals such as gorillas and chimps, so for the "elites" to withhold this out of spite... sorry, I just don't see it. Some people might be like that, but most won't be, and all it takes is one person to let the digital cat out of the metaphorical bag.

These are just the musings of a crazy optimist, so I might in time be proved wrong, but if we look at how quickly people have been pulled out of abject poverty over the past 100 years, it should give us some really good hope for the future.

14

TemetN t1_j1rwj3k wrote

Self-interest. Why in the world would they not sell it to everyone? Further, this argument ignores how many of these systems actually work. Health benefits, for example, are covered in most developed nations, and even in the US most of the population has access to them. Fundamentally these arguments rely on the reader being unwilling to analyze them; in practice, tech will continue to improve society.

1

Cryptizard t1_j1rxto1 wrote

>For all we know if things like posthumanism etc. become real they might as well just charge an unreasonable price and only the select few will get it

Why don't only rich people have electric cars or sweet gaming computers or literally any other new technology? Because they want to make money and they can make more money and be more rich by selling that shit to the public. It is called capitalism.

12

Cryptizard t1_j1ry5sp wrote

When have rich people ever tried to keep a technology to themselves? It doesn't make sense on its face. The only things that are exclusive to rich people are very rare and supply can't be increased, like real estate, precious gems, supercars, etc.

0

TheDavidMichaels t1_j1ryhxd wrote

The concept of posthumanism (the transformation of humanity through technology) and the singularity (a hypothetical future point at which technological progress accelerates beyond humanity's ability to comprehend or control it) are theoretical ideas that are not yet a reality and likely never will be. As such, it is not accurate to treat the benefits of these concepts as certain or inevitable.

It always feels like it's lazy commie nerds who want to stay home jerking off to furry porn who concern themselves so much with not working.

Billionaires are not necessarily the ones who create new technologies, including AI and other emerging technologies. They may invest in and support the development of these technologies, but they do not necessarily have the technical expertise to create or control them. This is typically the domain of researchers and engineers (better known as the poor), who specialize in these areas and have the necessary knowledge and skills to develop new technologies.

I do agree that it is important to consider the potential consequences of allowing certain technologies to be reserved for the elites, as this could lead to further inequalities and divides within society. It is therefore important to address these issues as we move forward with the development and deployment of these technologies. But to assume so many things based only on opinions just does not look credible.

2

TheDavidMichaels t1_j1rzuh8 wrote

This is childish. Compute power will continue to trend towards zero cost!!! With compute trending towards zero cost and the models already out there for free, millions are using it for free, paid for by someone!! Are there evil people? Yes, but that's just half the world. There is another half that acts to help.

3

TFenrir t1_j1rzvio wrote

>Yes, but who’s to say the barrier of cost will be taken down? It’s not as easily reproducible as you say, training it takes tons of data and computing power etc.

It's easy enough; we have, for example, about a dozen open source language models that we can run, and their quality improves even as their size decreases (Flan-T5 is a good example). We can see the same with image models: cheaper, faster, and more variety. We already have quite a few different models, not including fine-tuned offshoots.

And even for new models today, the costs are measured in the hundreds of thousands to millions of dollars. Those costs, while not cheap enough for me to train a model myself, are still incredibly low by industrial standards. And it will get cheaper to build the same models as techniques and hardware improve; there's no reason that trend wouldn't continue.
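The scale of those training costs can be sanity-checked with the widely used back-of-envelope rule that training a dense transformer takes roughly 6 × N × D floating-point operations (N parameters, D training tokens). A minimal sketch, where the FLOPs-per-dollar figure is a hypothetical assumption rather than any vendor's actual price:

```python
# Back-of-envelope training cost using the common ~6*N*D FLOPs rule
# of thumb for dense transformers (N = parameters, D = tokens).

def training_cost_usd(params: float, tokens: float, flops_per_dollar: float) -> float:
    """Rough one-time training cost in dollars."""
    total_flops = 6 * params * tokens  # approximate total training compute
    return total_flops / flops_per_dollar

# Hypothetical efficiency: ~1e17 FLOPs bought per dollar of GPU time.
FLOPS_PER_DOLLAR = 1e17

# A Chinchilla-scale run: 70B parameters trained on 1.4T tokens.
print(f"~${training_cost_usd(70e9, 1.4e12, FLOPS_PER_DOLLAR):,.0f}")  # ~$5,880,000
```

Under these assumed numbers a large 2022-era run lands in the single-digit millions, and a few-billion-parameter model in the tens of thousands, consistent with the "hundreds of thousands to millions" range above; the estimate scales linearly as hardware efficiency improves.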

> For all we know if things like posthumanism etc. become real they might as well just charge an unreasonable price and only the select few will get it, leading to a gattaca-like scenario in real life. These sorts of dystopias are a genuine possibility with the advancements we’re seeing.

But that's a fear-based conclusion, not something you are really coming to from an informed place. This isn't how technology has worked so far, and technology has made us all more "powerful": the internet, smartphones, and now these models. Why assume that at some arbitrary point this will suddenly no longer be true? Why assume that the world is filled with mustache-twirling rich villains?

13

TFenrir t1_j1s00ig wrote

And yes, this is maybe the greatest point. If any of this technology can be monetized, people are incentivized to make it cheap enough to reach as many customers as possible.

7

AbeWasHereAgain t1_j1s5kfv wrote

Because the same tools can be used to take them down. I just never understood all the hand-wringing over this. If an AI can put most people out of work, then an AI can be used to bypass the system.

2

christianCowan t1_j1sd7c5 wrote

Absolutely, the elites’ chips are just going to work differently… lying for them and whatnot.

2

IronJackk t1_j1sfmuc wrote

Because all AI will ever be is some cheap processing power and a few lines of code. The idea that it wouldn't be democratized is silly.

0

mocha_sweetheart OP t1_j1sfzya wrote

The whole point of the singularity is that AI could be sentient and even more capable than humans someday. By that logic, humans are just biomechanical machines on a big scale, made to reproduce while not expending too much energy on thinking neurons (which is why human brains are bad at computation, prediction, heuristic analysis, avoiding logical biases, etc.). It’s like calling the Great Wall of China “just a building,” or calling the Louvre and the history behind it “just a collection of paintings.” You’re missing the big picture and how it affects the culture and world around it.

1

72414dreams t1_j1sn1ga wrote

“This grand carcass yet” is an excellent sci fi short story on this subject.

2

ClubZealousideal9784 t1_j1sn2yk wrote

Controlling something way smarter than us, forever? Doesn't that sound like the old "the world revolves around us humans" argument? A new form of consciousness isn't something to be controlled or used.

1

Gotisdabest t1_j1sn4a5 wrote

I guess this argument works in a setting where gradual change occurs, but taking the end point only, in a theoretical post-scarcity world (for the rich) there's no real incentive to spread this tech. I agree that at least relatively gradual growth is far more likely, and hence we are going to get the incremental improvements at the same time as them. But it's worth noting that capitalism isn't exactly the best answer to the question of what helps the people once capitalism breaks down.

4

solomongothhh t1_j1sv14n wrote

That's why effective accelerationism is key. You accelerate progress until no institution or organization can react to the change; you destabilize the whole old structure, and you don't do it with a whimper but with a bang. Over-regulation and politicians getting in the way of progress will only produce a reality like atomic energy, where generating energy from it for general use and benefit gets banned, but they still make bombs with it in bulk, keeping all of humanity hostage to a couple of powerful people who can decimate this planet ten times over with their stockpiles. When you see laws being made about a technology, know that it's not to protect the general public but to land power in the laps of the few. That's why you accelerate the process and make the singularity come sooner rather than later: give them no time to react or hoard it for themselves.

2

Zermelane t1_j1svt44 wrote

> I’m quite skeptical of Elon Musk being on the board for OpenAI

Plenty of reason to be skeptical indeed, because he isn't! He hasn't been for almost half a decade.

> there are efforts to make the singularity awful for everyone by people lobbying to ban the use of AI for some things like art, but let’s be honest, the rich will still be able to use it, just not the general population

This has some plausibility, but let me throw a couple of random points at it:

It depends on the government being really awful. Which it is, but only so awful, and generally less so in liberal societies. Governments might do things like kill babies by banning parenteral nutrition with a healthy distribution of fatty acids, but most didn't ban cellphones, the Internet, CRISPR, etc..

Also: these days governments have foresight, in the sense that they might well ban things ahead of time, preventing even the rich from getting them. For instance, maybe you saw that recent story about trying to finally get the FDA to allow aging to be treated as a disease. If you're considering working on an aging treatment, and you are greedy, you want to sell it to everyone; but if the government says you won't be allowed to sell it to anyone, then you won't work on it and sell it only to rich people; you just won't work on it at all.

2

blueSGL t1_j1sx169 wrote

Pre-AGI

Mass poverty is destabilizing, destabilization is bad for business. Automation/AI will come at different rates, it won't be uniform or instantaneous.

Big chunks of the economy will either be massively assisted or replaced by AI (likely one then the other), those people need to be supported or they will be unable to buy the products and services that are being automated in the rest of the economy.

This will cause enough problems that UBI will have to happen. Governments/billionaires can't just sit back and watch the fireworks with automation/AI providing them everything; that point won't have been reached yet. They will still need the sectors that are not automated to continue working.

Post-AGI

Assumptions are made in the OP that whoever is first to crack AGI also cracks alignment; we get exactly one chance at that.
I highly recommend Nick Bostrom's Superintelligence for an in-depth look at all the ways 'obvious' solutions can go wrong, and some approaches for getting it right. Funnily enough, the approaches for getting it right generally involve asking the AI to do (and I'm massively paraphrasing) "the best thing for humanity," with the nuances and the balancing act of that exact goal worked out by the AI itself.

In such a scenario (one of the safest ways to handle alignment being to hand the problem off to the AI itself), the solution would not favor billionaires. The more you drill down and define the goal function yourself, the higher the likelihood you will fuck everything up during the one chance humanity has to get things right.

Either the light cone is gonna be paperclips or we might end up with a half decent post scarcity society.

1

Lawjarp2 t1_j1t0wpg wrote

Elites in most democratic countries are elites because of money, which will become redundant with AI. If you make all your money selling consumer goods and there are no consumers, you won't be so rich. There are also people who are truly resource- and production-rich, and they will prosper with AGI.

But there are also other kinds of elites, power elites, who are mostly in the government or in big entities that are nearly like one (say, Samsung in Korea). These are going to have the greatest incentive to keep AI for themselves. So wherever there are more elites with unchecked powers, like in China, Russia, Iran, etc., it is likely the people will be ignored or kept barely satisfied, cut off from the rest of the world.

0

PulsatingMonkey t1_j1t3nbh wrote

I blame Hollywood for cultivating this mindset in people: what makes a good story does not necessarily comport with reality. Look at GATTACA's portrayal of genetic modification as something reserved for the elite, when all *real* trendlines actually point to it becoming cheap and widely available. In fact, the *real* concern among experts is that it becomes TOO cheap and TOO available, so that bad actors are able to use it for their purposes, e.g. engineering bio-weapons.

So what worries me about futuristic tech like AGI and gene editing is not the elite but the general population not being mature or 'evolved' enough for it, because I know I trust Bill Gates more than most people.

0

raubhill t1_j1t6lh3 wrote

A society widely using AI will dominate one that restricts its use to the wealthy.

5

casual-existence t1_j1tgi8q wrote

Everyone is saying “the rich” as if humans become a different species once given lots of money. It’s not that simple; it's as if you are referring to a close-knit and singular culture among rich people. AI will reveal more than people seem to think: it will reveal our faults just as it will foster our talents. For example, imagine a bot that understands morality. It doesn’t have to feel it, just understand it. This theoretical bot could run through a company’s full computer system and scan for instances of child labor, unsanitary conditions, sexual abuse, and probably a helluva lot more. One may ask, “couldn’t the company refuse?” Yes, they could, but that would make everyone question why. Money can be used to avoid consequences, no doubt, but this technology has the potential to hold these people accountable regardless of whether they are “good” or not. These people control more than a usual person does, which means their actions make larger impacts. Imagine you had that much power, that much responsibility: every breakdown, every mistake, everything you do sends ripples out to the world, to the people. Know this: faults have always been easier to see and remember than merits, for we constantly hold ourselves and others to the expectation of perfection. In a way, humans are inherently self-conscious in a very negative way, and we project this effect more heavily onto people and institutions that we don’t understand. Could you imagine that rich life? Remember, contempt is usually the default in modern society, so always question it when you feel that way. Knowledge kills hatred; wisdom is the key.

I need to go to bed just watched avatar way of the water it was so great why did I write this it’s so late, question what’s certain, and discover your fate, master your environment, and you will be great, gather your knowledge, and kill the hate. Thank you everybody, and goodnight.

1

iateadonut t1_j1tmhxn wrote

The Matrix script was originally that humans were used for our processing power (instead of the dumb idea that we somehow generate energy without input that ultimately comes from the sun).

Besides all the other good points in this thread (concerning open-source software, etc), there's a good probability that the nature of work will change, as human brains are extremely good at performing certain tasks with far less energy input than computers. One possible scenario is that we are utilized as a distributed super-computer. It will be interesting to see what types of human-machine interfaces will be utilized.

1

Mastermind1776 t1_j1tn686 wrote

My main critique of your premise (though I could be misunderstanding your point) is that even in a post-scarcity society with capitalist elements (for the rich), there will always be a use for more money. Post-scarcity (based on my understanding of the term) mainly means all basic needs being taken care of, but there will likely be some exotic desires that still have monetary or cultural limits placed on them.

However, what often seems to drive the rich is building net worth in order to start new businesses in new market niches. All it takes is one entrepreneur to mass-market the AI/singularity/post-human tech and scale it to the rest of the population at an appropriate price. Another avenue is if a capable entrepreneur or group open-sources the tech and it scales in a DIY-type way.

The rich and powerful are heterogeneous (like the rest of us) in their motivations, so all it takes is one soul to break the mold and “tradition” and give broad access to the tech. This is under the assumption that the tech doesn’t have some fundamental aspect that makes it impossible to scale the cost down or distribute it widely.

3

Gotisdabest t1_j1tpsxr wrote

>mainly applies to all basic needs being taken care of, but there will likely be some exotic desires that still have some monetary or cultural limits placed in it.

In an AI-manufactured post-scarcity, it's quite likely that more or less every desire will be taken care of. Cultural limits, quite possibly, but those can't be fixed with any kind of money when the person you're selling to has no unmet desires.

>The rich and powerful are heterogenous (like the rest of us) in their motivations so all it takes is one soul to break the mold and “tradition” and give accessible access to the tech.

The thing is that at so fast a rate of progress, chances are that a remarkably select few may be in charge while the rest are simply unable to cope with the change. There will essentially be no new niche to conquer in terms of business. Once we hit basic post-scarcity, more extensive post-scarcity won't really be far behind, and then power will be the only possible commodity, lying with those who may decide to abuse it or exclude others from it (it is quite reasonable to think that the rich class includes an abnormally high number of empathy-lacking people).

2

ThoughtSafe9928 t1_j1ttw2p wrote

Both examples you provided are very clearly not the same as AI or the Internet, i.e. transformative technologies.

You literally can't limit "post-singularity" benefits to one thing. That's like saying "a cure for cancer": sure, there may at one point be a singular cure to the thousands of diseases called "cancer," but there are still going to be individual cures that are immensely helpful. Open-source Stable Diffusion vs. OpenAI's DALL-E is a great current example of why you can't possibly expect this technology to be limited somehow.

1

Jakeattack77 t1_j1ttyeb wrote

It definitely will be at first, and not just that, but these will be tools of capital that continue to drive collapse if we don't fight to take them back.

Also, y'all, please watch AMC's Pantheon. It's the absolute perfect singularity TV show; it's what led me here, and I've been thinking about the singularity ever since I watched it.

1

mocha_sweetheart OP t1_j1tu3ok wrote

Orion’s Arm is the absolute perfect singularity setting IMO: realistic methods for body switching, humans actually becoming as intelligent as AI singularities, uploading, virtual worlds, etc. It even gets into the concept of multiple singularities, where each stage is incomprehensible to the previous one, with many such stages, the highest of which is absolutely godlike in terms of intelligence, technology, awareness, etc.

1

ReasonablyBadass t1_j1tui7p wrote

That's why we need to OpenSource as much as possible and fight copyright and patent law.

6

SmoothPlastic9 t1_j1tuxm4 wrote

With AI, couldn't you technically just take over an entire country and monitor literally every action there is? Whoever has the best and smartest AI could probably get away with anything.

1

BuscadorDaVerdade t1_j1ua9j5 wrote

Private islands are nothing futuristic. Medieval kings had those too. Flying cars and private jets are expensive to manufacture. If they were cheap like smartphones, more people would have them, although we might run into issues with air space scarcity.

2

Molnan t1_j1v6ccj wrote

That's a bit like someone in the early 20th century wondering whether cars, plastics and chemical fertilizers will be reserved for the elites, while the poor keep riding horses, using wooden and metal objects and eating organic food. See how quaint that concern sounds? In a similar way, most post-singularity wonders we anticipate, like enduring perfect health and youth and a seemingly luxurious lifestyle, will be cheap parlor tricks nobody will care about, let alone try to keep away from the masses.

And to what end would they even try, when AI agents will be better servants than any human slave ever could? In fact, the main risk involving evil elites is that they may simply decide to get rid of everyone else, but they just won't have the political power to do that.

I do expect that there will be conflict about how resources are used, what risks are deemed acceptable and what governance structures and bodies should call the shots, but the relevant issues will be on a scale spanning the whole galaxy and beyond and millions of years into the future.

1

OhSeeker t1_j1v6k08 wrote

It will be orchestrated by the elites and carried out by incels.

1

Depression_God t1_j1vdohz wrote

Just like any other new technology, it will be reserved for elites until the cost comes down to the point that normal people can afford it. If it becomes incredibly powerful, it's likely the elites will try to prevent it from leaking out to the public. But once it does get leaked, the genie can't be put back into the bottle. At that point, every authoritarian country will start shooting themselves in the foot by desperately banning it.

1

ngnoidtv t1_j1wh84k wrote

Support and contribute to open source projects and never, ever pay to use AI tools.

2

tjr5zz t1_j1wnp6j wrote

The nerds will get there first. Why would they turn it over to the "elites" when they reach god-like power first?

1

mj-gaia t1_j1wslop wrote

Because that would be the point where everyone below them comes together to kill them all

jk

1

Sandbar101 t1_j1xdx6q wrote

Because if AI enables the existence of a true post scarcity economy there will no longer be anything to reserve. We will all become The Elites.

3

Exel0n t1_j23ayii wrote

80% of the population in modern-day "developed" countries used to be subsistence farmers, peasants.

That's why you have so much stupidity going on. All that fear-mongering actually works on them. Those low-IQ would-be peasants don't understand economics, history, or tech development. They really do think everything good will be elites-only.

1