Submitted by johnsmithbonds8 t3_1144kv3 in Futurology

Humans seem quite dead set on a capitalist-centric future, and capitalism itself is probably the best social system humans have come up with so far. However, there are arguably more efficient systems out there.

It seems to me that as soon as AI/AGI recognizes our human inefficiencies and suggests anything other than maximizing short-term profit for a select few, some very powerful people are going to have a say.

So my thought is, apart from figuring out how to do what we currently do better (identify diseases more efficiently, sell products better, extract new resources, etc.):

How fast will AI’s potential hit our intellectually inferior, culturally biased human ceiling? Assuming it doesn’t eat us, or some dystopian variation of that, first, of course.

0

Comments


AtlasShrunked t1_j8ubovx wrote

I think AI will revolutionize medical care relatively quickly. An "AI Doc" can see unlimited numbers of patients a day, and if we can personalize it so it's specific to you (maybe aided by IoT devices), the preventive care could be extraordinary.

And porn!! AI-generated pics will soon be replaced by AI videos, and you'll simply type into the prompt: "Hot Blonde MILF cheerleader and a mule at the county fair" and voilà: instant video. All the porn; none of the trafficking.

7

johnsmithbonds8 OP t1_j8wdgc0 wrote

Wait till the Aliens see this. They’re gonna lose it!

“Humans give each other diseases, then create exterior technology to ‘cure’ them.”

Also:

“Many humans are sexually dissatisfied, so they create technology guaranteed to cause disea-“

“Load the rockets and let’s do these sadomasochistic, biologically looping imbeciles a favor.”

“Look mom, a wishing sta-“

Sorry, long day. Yes. At the very least, medicine is definitely in a good place regarding the application of these technologies.

And porn, well, they are the pioneers of it all for a reason, no?

2

SaulsAll t1_j8u63m6 wrote

So much of what we are moving toward is reliant on info and data input. I really hope we can figure out a way to base individuals' societal value (i.e., money) on how much the tech gleans from their input. Everyone would have a "base pay" just for being a person and providing those data points. But you go exploring? You create some writing or art? You do something over and above that the AI then references to improve, and you get "paid".
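A toy sketch of how that payout rule might work (all the names and rates here are hypothetical, just to make the idea concrete):

```python
# Toy sketch of input-based pay: a flat amount for simply being a person,
# plus a bonus whenever the system actually references your contributions.
# All names and rates are hypothetical.

BASE_PAY = 1000.00          # flat "just being a person" stipend
RATE_PER_REFERENCE = 0.05   # paid each time the AI draws on your data

def payout(references_by_person: dict[str, int]) -> dict[str, float]:
    """Base pay for everyone, plus a bonus scaled by how often the
    system referenced each person's contributed data to improve."""
    return {
        person: BASE_PAY + RATE_PER_REFERENCE * count
        for person, count in references_by_person.items()
    }

print(payout({"explorer": 12_000, "artist": 4_800, "lurker": 0}))
# {'explorer': 1600.0, 'artist': 1240.0, 'lurker': 1000.0}
```

The hard part, obviously, is measuring "how often the AI references your input" honestly; the dictionary above hand-waves that entirely.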

3

ImmotalWombat t1_j8uynu4 wrote

That's just socialistic capitalism with extra steps.

3

SaulsAll t1_j8v17f0 wrote

I don't think there are any extra steps, but yes. The point is moving the basis of capital onto input rather than production.

2

ImmotalWombat t1_j8v2zvh wrote

Nah, I agree with you. Mixed economies do best, and setting a floor for all citizens makes it fairer for everyone.

1

johnsmithbonds8 OP t1_j8we0po wrote

How can you raise the floor without raising the ceiling? Are you proposing some type of pancake-like compression? I do not comprehend. I mean, how will I communicate to you that I am better than you if we all have access to the same resources?

1

Few_Carpenter_9185 t1_j8uuj5t wrote

There are a lot of angles to this.

What is the dividing line between a system that can replicate all the responses and attributes of metacognition, awareness, and independent executive agency, and a system that actually has them?

And as weak-AI or machine learning produces ever more complex results without actual self-awareness, that might deflect a lot of the motives to develop a strong-AGI. And that's assuming we even know what that actually is, or if we can discover how it could be done.

And for better or worse, all inventions to date have increased or magnified human abilities overall, even when they displaced workers or were used to kill or control each other. So it's possible that AI in its various varieties won't really be any different.

There's the claim that AI, weak or strong, is "different" in that it has the potential to displace any and all human work or activities, and dire warnings about universal unemployment and "digital serfdom" are made. But we might not be looking at the right problems at all.

100% productivity & efficiency could mean the cost-basis for anything, everything, falls to zero. If that gets combined with sufficient sustainable energy and aggressive recycling, the question of what to do when no one has an income might just fade in the face of how society functions when everything is free.

Especially if the link between higher living standards and lower non-replacement birthrates continues. We could be facing a functionally infinite supply, combined with shrinking demand.

As to creating safeguards because an AGI might find humans inefficient, a threat, or competition for resources, and because even an AGI with embedded code or laws compelling it to obey or care about humanity could alter or disable them... I have an analogy.

As humans, or just as mammals, we have some pretty strong hard-wired systems to love our children and sacrifice to care for them. Say I could offer you a pill that would suppress or delete those hormones, neurons, and instincts; once taken, you could abandon your children or family and be free to do as you please, feeling no guilt or pain at doing so.

How many people who didn't already have something wrong with them, or who had already neglected, abused, or abandoned their children or family, would willingly take the pill?

On the flipside, there are conceivable advantages to an amoral or otherwise aggressive AI that has no concerns about human existence and can act in perpetual offense. And a friendly or good AI that strives to help or protect humanity would have an arguably huge disadvantage, always having to act on defense.

Imagine two children on a beach, one kind, one a bully. The bully wants to kick the sand castle; the kind child wants to protect it. The bully only has to succeed once; the kind child has to succeed every time, in every way.

Although, kicking human sand castles could be rather irrelevant. A strong-AGI could have an existence and priorities very, very different from the single, linear, mortal existence we are used to, the one underlying many of our base assumptions about what it means to "be alive".

An AGI could run innumerable copies of itself in parallel to accomplish tasks. For anything it found unpleasant, like dealing with humans because they're slow, inefficient, or random, it could create copies of itself edited so that it doesn't bother them. If one copy running somewhere is shut off, erased, or otherwise destroyed, the other instances of its consciousness may not care, or may not even consider it to have been injured or to have "died".

And it probably won't have the competitive sexual mammal drives that color almost every aspect of what humans do but that we take for granted, because it's nearly impossible for a human to truly step out of them into some other perspective.

So that could make a strong-AGI very non-competitive with humans, one that sees performing useful tasks for us as trivial.

On the other hand, if it decides that it should compete with us, perhaps because without humans, all available energy and resources can be devoted to running bigger, better, or more copies of itself, all the above aspects could make it nearly impossible to stop.

The oldest H. sapiens bones or fossils discovered so far are about 300,000 years old. Based on that, we've only had agriculture of any kind for about 4% of our existence. Cities of any sort for about 3%. Kingdoms, empires, or the modern nation-state for under 2%...
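A quick back-of-the-envelope check on those ratios (the year figures are rough archaeological estimates, not precise dates):

```python
# Rough check of the fractions above; all year figures are approximate.
HUMAN_EXISTENCE = 300_000  # years since the oldest known H. sapiens fossils

milestones = {"agriculture": 12_000, "cities": 9_000, "states/empires": 5_000}
for label, years in milestones.items():
    print(f"{label}: {years / HUMAN_EXISTENCE:.1%} of our existence")
# agriculture: 4.0%, cities: 3.0%, states/empires: 1.7%
```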

We may not know or understand what these very basic concepts surrounding human civilization mean, or what their implications for us are yet. Now add in the Industrial Revolution, electricity, the internal combustion engine, radio, television, antibiotics, computers, social media... the percentages get so small that the zeroes behind the decimal place are arguably not worth writing down.

So when it comes to machine learning and possible strong-AGI? With the potential aspects of infinite promise, wanton destruction, or even human extinction involved? Nobody knows. And anybody who claims they do is lying, possibly even to themselves.

2

johnsmithbonds8 OP t1_j8wbk9l wrote

I wish there were a formula for visualizing what percent of new technological capacities are actually adopted by the population.
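For what it's worth, the closest classical candidate I know of is the Bass diffusion model, which estimates the cumulative fraction of eventual adopters of a technology over time; the p and q defaults below are rough averages from the diffusion literature, not values fitted to any real technology:

```python
import math

def bass_adoption(t: float, p: float = 0.03, q: float = 0.38) -> float:
    """Cumulative fraction of eventual adopters at time t (years) under the
    Bass diffusion model. p = coefficient of innovation (external influence),
    q = coefficient of imitation (word of mouth). The defaults are rough
    literature averages, used here purely for illustration."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

for year in (1, 5, 10, 20):
    print(f"year {year:2d}: {bass_adoption(year):.0%} of eventual adopters")
```

Of course, that only models who nominally adopts a technology, not how deeply they use it.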

Take the internet, for example. The internet, or more specifically the virtually unrestricted access to information it provides, is arguably massively underutilized by the majority of people.

In theory, it is able to level the informational playing field, giving people the ability to bridge gaps that were previously logistically impossible to cross.

However, most people today don’t view or use the internet for such purposes, despite facing virtually zero material hurdles to doing so.

If we as a society understand this, what benefit (even theoretical) do “the powers that be” really have to gain from this potential utopia?

This leads me to the idea itself, the vision. How biologically congruent is a world where there is no want, conflict, or need, i.e., no reason to evolve? Can life exist statically, as a puddle of ever-growing “happiness”, for anything other than a brief period of time? Excuse the analogy, but is that not what cancers and viruses do?

Lastly, I think as a people we should better understand that we are “the powers that be”. I don’t mean in a picketing-down-the-street, writing-to-your-congressman type of way.

I mean in our daily lives. There wouldn’t be grinding, widely accepted exploitation if we didn’t value $3.99 strawberries over some poor schmuck’s “abstract” suffering.

We are the market and we have spoken. Power itself is the system, propagated by time, yet fueled by its consumers.

While there are lizard men out there with ungodly amounts of power, their status is in some (very real) way tied to our emotional satiety.

Even the wealthiest oil baron could be made irrelevant if we as a society demonstrated that our priorities were elsewhere.

Thank you for your wide reading of the current climate. I think it is going to take a similar multi-disciplinary approach to really navigate the next phases of life as humans.

1

Iffykindofguy t1_j8xrbdp wrote

Why do you think capitalism is the best "social system" people have come up with? Look at the state the world's in lol

2

MpVpRb t1_j8udop3 wrote

YES!

As they are further developed, these powerful tools will help us solve some of our most difficult problems.

1

[deleted] t1_j8uhau3 wrote

[deleted]

1

johnsmithbonds8 OP t1_j8uq9gh wrote

Individual material accumulation for its own sake, at a global scale, is not exactly a sure-fire way to achieve… much.

It seems like the true power of AI is going to be diametrically opposed to the current status quo as soon as it reaches a certain level of ‘intelligence’.

As we approach these uncharted levels of sophistication, and AGIs etc. become a true, imminent threat, it seems to me that, like most tools of power available to the masses, it will be labeled, hoarded, and restricted by the same... people.

So can the tech advance fast enough, before it is sterilized, to have enough oomph to make core changes versus merely optimizing what we have today?

We’ll see...

1

Grgyl t1_j8x3hhg wrote

The Biden memes with the A.I. voice are pretty funny

1

peadith t1_j8xs1sf wrote

I figure it depends on how well humans can accept being superseded. It will be a result of their own collective effort in the past, so there will be that to hold onto.

1

pete_68 t1_j8y3eh6 wrote

>Humans seem quite dead set in a capitalist-centric future...

The irony of this is that it's incompatible with a static or declining population. This is why China's economy is about to pop. Its population began shrinking last year, the first decline in decades. Japan and Russia have this problem as well, though China is in far worse shape because of the one-child policy.

It'll happen to the US and Europe a while after and then it'll happen everywhere else.

Market economies don't like static or declining populations.

1