Submitted by ken81987 t3_10k0fx0 in singularity

Thinking a lot about what will happen in the next decade. Some assumptions

  • ai will be able to "comprehend" like a human, access to all knowledge, and perfect computational powers.
  • ai will not have desires or emotions. Is still a tool

Imagine something that can perfectly engineer products and services. Would businesses even need to use other businesses anymore? If your ai can perform all administrative, managerial, financial duties, I'd envision a single company being vertically integrated at every level, or possibly see conglomerates of greater scale than ever.

If you have an engineer that can understand all sciences at once, and perfectly simulate product testing, would you see ai creating unimaginable products? It took human businesses decades to improve cars, electronics, batteries,. computers. Would technology advance exponentially? What types of things would be created

4

Comments

You must log in or register to comment.

HeinrichTheWolf_17 t1_j5nz6co wrote

Hard Takeoff/FOOM scenario. Many will underestimate AGI when it gets here, there will be arguments, many of the same nature as the ones that surround ChatGPT/GPT-3, people will say it’s not really understanding anything, it’s not sentient so it can’t create anything of value, oh, it requires way too much hardware to be a hard takeoff, just ignore all that optimization we’ve done with every other algorithm since 2011, and then rinse and repeat.

I think it’ll quickly skyrocket past any Human intellect once it’s at even a sub human level of comprehension. After that point I expect chaos and finger pointing, Gary Marcus will use his same toe in line arguments, Chomsky will bury his head in the sand and say it’s not Human, whatever that means. But people here and in a lot of other places in the AI community will understand it will revolutionize the world at a rate we have never witnessed before.

Here’s the thing, Humans innovate, acclimate and then adapt, not the other way around, AGI will push forward and change the world, and those in denial, the religious crowd, the luddites and the conservatives will be left in the past until their ready to acclimate to the new tech themselves, which I’m betting on that they will. I don’t for one second think they’re going to deny any of the treatments we’ll have for aging, I could understand them not going for Nanotechnological Augmentation like a Transhumanist like myself would be doing, but many of them will at least embrace biotech at the very least.

10

just-a-dreamer- t1_j5nv8ay wrote

An AGi is not tested against the forces of evoution and therin lies the problem and maybe our doom. Humans are imperfect for a reason, it allows us to survive.

We humans follow a system called heuristics, a method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term.

Suppose you are a doctor and face 10.000 patients in your life. The "code" for a doctor as general instruction is the Hippocratic Oath, it works good enough. Yet there may be cases where a doctor should not keep people alive at all costs.

An AGI would be "god like" in computing powers and calculate eventualities a human is not capable to comprehend.

An AGI might give people what they wish, not what they want.

One example is the famous paperclip maximizer experiment. An ASI is given the goal to produce as many paperclips as possible and proceeds by converting every atom of the observable universe into paperclips. Again, constraints can be added to the goal, but an ASI would always find a loophole:

Goal: produce at least one million paperclips. Solution: the ASI will still build infinite paperclips, because it would never assign exactly 100% probability to the hypothesis that it has achieved its goal.

Goal: produce exactly one million paperclips. Solution: the ASI will build one million paperclips, but it can never be 100% sure that they are exactly one million. It would count them again and again. To become more efficient at counting, it will increase its computational power by converting atoms into computronium (computing matter), eventually destroying the observable universe.

Goal: produce exactly one million paperclips and stop doing anything when you are 99% sure. Solution: the ASI will build one million paperclips, but it will produce a perverse instantiation of the criterion to compute the probability.

Goal: produce exactly one million paperclips and stop doing anything when you are 99% sure. The probability is assigned by an external device. Solution: the ASI will hack the device.

I could continue this game forever, and I would eventually find a satisfying solution that seems flawless. The problem is that I’m human, an ASI would be way smarter than me and it would find flaws that I couldn’t even imagine.

The problen is laid out at https://medium.com/i-human/why-we-should-really-be-afraid-of-ai-an-analysis-of-obvious-and-non-obvious-dangers-cb2dfb8f905d

2

cloudrunner69 t1_j5o3hoe wrote

Why would a superior intelligence need to make a gazillion paperclips, is there an over abundance of paper in the future? Doesn't sound like filing papers is something a superior intelligence would be doing.

1

turnip_burrito t1_j5o7uss wrote

The idea is a result of the "orthogonality thesis": the idea that goals and intelligence are two separate aspects of a system. Basically, a goal would be set and the intelligence is just a way to achieve the goal.

This kind of behavior is seen in reinforcement learning systems where humans specify a cost function, which the AI minimizes (equivalently maximizing reward). The AI will act to fulfill its goal (maximize reward) but do stupid stuff the researchers never wanted it to do, like spinning in tiny circles around the finish line of a racetrack to rack up points, for example. It's the same kind of loophole logic that comes from stories of lawyers, genies, and such that the AI agent uses to maximize reward.

It's entirely possible this method of training an agent (maximize this one loss function) is super flawed and a way better solution is yet to be created.

3

EpicProdigy t1_j5q4vwl wrote

A super human mind probably wouldnt try to make an infinite amount of paper clips. But a machine mind might. People need to stop thinking of machine intelligence as the same as human intelligence. Im sure we could give an ASI a dumb endless task, and it will diligently do that dumb task in an extremely intelligent way until the heat death of the universe (And it will try to create new technologies in an attempt to survive entropy/escape the universe just so that It can keep doing that dumb pointless task for eternity)

Thats why machine intelligence can be scary. The only aspect we might share with it is having intelligence. An intelligent mind doesn't mean it thinks and behaves like a human does.

3

Iffykindofguy t1_j5qd0fc wrote

This is so narrow minded and overconfident I find it baffling anyone put this much energy into it.

1

No_Ninja3309_NoNoYes t1_j5o1169 wrote

I find it hard it look forward. Grim dark and utopia scenarios seem equally likely. If we look back at the Internet, what do we learn? The internet started out as a project for scientists and to some extent the military to exchange information. Now it is a place where you can post funny pictures or clickbait articles. And more importantly big tech companies dominate the landscape.

Your scenario, if I understand it correctly, speaks of a single entity in control of AGI and therefore the world. I don't think it matters whether it is one entity or seven or some other small number. The problem is that you can have unintended consequences.

The internet has been used for propaganda and to recruit terrorists. If you make a system that can connect people and let them search for information, bad actors could take advantage. Make no mistake, AGI is as much a tool as a weapon. So really just to be safe we want AGI to be in the right hands. That doesn't mean giving it to everyone, but also not to a happy few.

1

ttystikk t1_j5pg94f wrote

If it's true AGI that's smarter than humanity, we won't know until it's too late.

I find that possibility to be remote because it would have to know all of our capabilities and how could it know that?

1