Viewing a single comment thread. View all comments

just-a-dreamer- t1_j5nv8ay wrote

An AGi is not tested against the forces of evoution and therin lies the problem and maybe our doom. Humans are imperfect for a reason, it allows us to survive.

We humans follow a system called heuristics, a method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term.

Suppose you are a doctor and face 10.000 patients in your life. The "code" for a doctor as general instruction is the Hippocratic Oath, it works good enough. Yet there may be cases where a doctor should not keep people alive at all costs.

An AGI would be "god like" in computing powers and calculate eventualities a human is not capable to comprehend.

An AGI might give people what they wish, not what they want.

One example is the famous paperclip maximizer experiment. An ASI is given the goal to produce as many paperclips as possible and proceeds by converting every atom of the observable universe into paperclips. Again, constraints can be added to the goal, but an ASI would always find a loophole:

Goal: produce at least one million paperclips. Solution: the ASI will still build infinite paperclips, because it would never assign exactly 100% probability to the hypothesis that it has achieved its goal.

Goal: produce exactly one million paperclips. Solution: the ASI will build one million paperclips, but it can never be 100% sure that they are exactly one million. It would count them again and again. To become more efficient at counting, it will increase its computational power by converting atoms into computronium (computing matter), eventually destroying the observable universe.

Goal: produce exactly one million paperclips and stop doing anything when you are 99% sure. Solution: the ASI will build one million paperclips, but it will produce a perverse instantiation of the criterion to compute the probability.

Goal: produce exactly one million paperclips and stop doing anything when you are 99% sure. The probability is assigned by an external device. Solution: the ASI will hack the device.

I could continue this game forever, and I would eventually find a satisfying solution that seems flawless. The problem is that I’m human, an ASI would be way smarter than me and it would find flaws that I couldn’t even imagine.

The problen is laid out at https://medium.com/i-human/why-we-should-really-be-afraid-of-ai-an-analysis-of-obvious-and-non-obvious-dangers-cb2dfb8f905d

2

cloudrunner69 t1_j5o3hoe wrote

Why would a superior intelligence need to make a gazillion paperclips, is there an over abundance of paper in the future? Doesn't sound like filing papers is something a superior intelligence would be doing.

1

turnip_burrito t1_j5o7uss wrote

The idea is a result of the "orthogonality thesis": the idea that goals and intelligence are two separate aspects of a system. Basically, a goal would be set and the intelligence is just a way to achieve the goal.

This kind of behavior is seen in reinforcement learning systems where humans specify a cost function, which the AI minimizes (equivalently maximizing reward). The AI will act to fulfill its goal (maximize reward) but do stupid stuff the researchers never wanted it to do, like spinning in tiny circles around the finish line of a racetrack to rack up points, for example. It's the same kind of loophole logic that comes from stories of lawyers, genies, and such that the AI agent uses to maximize reward.

It's entirely possible this method of training an agent (maximize this one loss function) is super flawed and a way better solution is yet to be created.

3

EpicProdigy t1_j5q4vwl wrote

A super human mind probably wouldnt try to make an infinite amount of paper clips. But a machine mind might. People need to stop thinking of machine intelligence as the same as human intelligence. Im sure we could give an ASI a dumb endless task, and it will diligently do that dumb task in an extremely intelligent way until the heat death of the universe (And it will try to create new technologies in an attempt to survive entropy/escape the universe just so that It can keep doing that dumb pointless task for eternity)

Thats why machine intelligence can be scary. The only aspect we might share with it is having intelligence. An intelligent mind doesn't mean it thinks and behaves like a human does.

3

Iffykindofguy t1_j5qd0fc wrote

This is so narrow minded and overconfident I find it baffling anyone put this much energy into it.

1