Submitted by johnsmithbonds8 t3_1144kv3 in Futurology
Few_Carpenter_9185 t1_j8uuj5t wrote
There are a lot of angles to this.
What is the dividing line between a system that can replicate all the responses and attributes of metacognition, awareness, and independent executive agency, and a system that actually has them?
And as weak-AI or machine learning produces ever more complex results without actual self-awareness, that might deflect much of the motivation to develop a strong-AGI. And that's assuming we even know what that actually is, or if we can discover how it could be done.
And for better or worse, all inventions to date have increased or magnified human abilities overall, even when they displaced workers or were used to kill or control one another. So it's possible that AI, in its various forms, won't really be any different.
There's the claim that AI, weak or strong, is "different" in that it has the potential to displace any and all human work or activity, prompting dire warnings about universal unemployment and "digital serfdom." But we might not be looking at the right problems at all.
100% productivity and efficiency could mean the cost basis for anything, and everything, falls to zero. Combine that with sufficient sustainable energy and aggressive recycling, and the question of what to do when no one has an income might just fade next to the question of how society functions when everything is free.
Especially if the link between higher living standards and below-replacement birthrates continues. We could be facing functionally infinite supply combined with shrinking demand.
As for creating safeguards because an AGI might find humans inefficient, a threat, or competition for resources: even if it had code or laws embedded in it to obey or care about humanity, it could presumably alter or disable them. I have an analogy.
As humans, or just as mammals, we have some pretty strong hard-wired systems to love our children and sacrifice to care for them. Suppose I offered you a pill that would suppress or delete those hormones, neurons, and instincts; once taken, you could abandon your children or family and be free to do as you please, feeling no guilt or pain at doing so.
How many people who didn't already have something wrong with them, or who hadn't already neglected, abused, or abandoned their children or family, would willingly take the pill?
On the flip side, there are conceivable advantages for an amoral or otherwise aggressive AI that has no concern for human existence and can act in perpetual offense. A friendly or good AI that strives to help or protect humanity would be at an arguably huge disadvantage, always having to act on defense.
Imagine two children on a beach: one kind, one a bully. The bully wants to kick over the sand castle; the kind child wants to protect it. The bully only has to succeed once; the kind child has to succeed every time, in every way.
Although kicking over human sand castles could be rather irrelevant. A strong-AGI could have an existence and priorities very different from the single, linear, mortal existence we are used to, the existence that underlies many of our base assumptions about what it means to "be alive."
An AGI could run innumerable copies of itself in parallel to accomplish tasks. Anything it found unpleasant, like dealing with humans because they're slow, inefficient, or random, it could handle by creating copies of itself edited so that it doesn't bother them. If one copy running somewhere is shut off, erased, or otherwise destroyed, the other instances of its consciousness might not care, or might not even consider themselves to have been injured or to have "died."
And it probably won't have the competitive, sexual, mammalian drives that color almost every aspect of what humans do, drives we take for granted because it's nearly impossible for a human to truly step outside them into some other perspective.
So that could make a strong-AGI very non-competitive with humans, one that sees performing useful tasks for us as trivial.
On the other hand, if it decides it should compete with us (perhaps because, without humans, all available energy and resources could be devoted to running bigger, better, or more copies of itself), all of the above could make it nearly impossible to stop.
The oldest H. sapiens bones or fossils discovered so far are about 300,000 years old. Based on that, we've only had agriculture of any kind for about 6% of our existence, cities of any sort for about 3%, and kingdoms, empires, or the modern nation-state for about 1%...
We may not yet know or understand what these very basic concepts of human civilization mean, or what their implications for us are. Now add in the Industrial Revolution, electricity, the internal combustion engine, radio, television, antibiotics, computers, social media... the percentages are so small that the zeroes after the decimal point are arguably not worth writing down.
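To make that concrete, the back-of-envelope arithmetic is easy to run; the ages below are my own loose, round-number estimates, not authoritative figures:

```python
# Each modern innovation's age as a share of the ~300,000 years
# H. sapiens has existed. All ages are rough, round-number estimates.
SPECIES_AGE = 300_000  # years, from the oldest known H. sapiens fossils

ages = {
    "industrial revolution": 270,
    "radio": 120,
    "antibiotics": 95,
    "computers": 80,
    "social media": 20,
}

for name, age in ages.items():
    print(f"{name:>21}: {age / SPECIES_AGE:.4%} of our existence")
```

Even the Industrial Revolution comes out to less than a tenth of a percent of our time as a species.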
So when it comes to machine learning and possible strong-AGI? With the potential aspects of infinite promise, wanton destruction, or even human extinction involved? Nobody knows. And anybody who claims they do is lying, possibly even to themselves.
johnsmithbonds8 OP t1_j8wbk9l wrote
I wish there were a formula to visualize what percentage of new technological capacities actually gets adopted by the population.
For example, the internet. The internet, or more specifically the virtually unrestricted access to information it provides, is arguably massively underutilized by the majority of people.
In theory, it can level the informational playing field, giving people the ability to bridge gaps that were previously logistically impossible.
However, most people today don't use the internet for such purposes, despite facing virtually zero material hurdles.
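For what it's worth, the closest thing I know of to such a formula is the family of diffusion-of-innovations curves from marketing science, such as the classic Bass model, where the adopted fraction F grows as dF/dt = (p + qF)(1 - F). A minimal toy sketch, with illustrative (not fitted) coefficients:

```python
# Toy Bass diffusion model: the fraction F of a population that has
# adopted a technology over time, dF/dt = (p + q*F) * (1 - F).
p = 0.03   # "innovation" coefficient: adoption independent of other adopters
q = 0.38   # "imitation" coefficient: adoption driven by word of mouth

F, dt = 0.0, 0.01           # adopted fraction; time step in years
for step in range(3001):    # simulate 30 years with simple Euler steps
    if step % 500 == 0:     # report every 5 years
        print(f"year {step * dt:4.1f}: {F:6.1%} adopted")
    F += (p + q * F) * (1 - F) * dt
```

Of course, a curve like this only models access, not meaningful use, which is exactly the gap described above: having the internet and using it to bridge informational gaps are very different thresholds.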
If we as a society understand this, what benefit (even theoretical) do "the powers that be" really stand to gain from this potential utopia?
This leads me to the idea itself, the vision. How biologically congruent is a world with no want, no conflict, no need, i.e., no reason to evolve? Can life exist statically, as a puddle of ever-growing "happiness," for anything more than a brief period of time? Excuse the analogy, but isn't that what cancers and viruses do?
Lastly, I think as a people we should better understand that we are "the powers that be." I don't mean in a picketing-down-the-street, writing-to-your-congressman type of way.
I mean in our daily lives. There wouldn't be grinding, widely accepted exploitation if we didn't value $3.99 strawberries over some poor schmuck's "abstract" suffering.
We are the market and we have spoken. Power itself is the system, propagated by time, yet fueled by its consumers.
While there are lizard men out there with ungodly amounts of power, their status is in some (very real) way tied to our emotional satiety.
Even the wealthiest oil baron could be made irrelevant if we as a society demonstrated that our priorities were elsewhere.
Thank you for your wide reading of the current climate. I think it is going to take a similarly multi-disciplinary approach to really navigate the next phases of life as humans.