
cloudrunner69 t1_j5o3hoe wrote

Reply to comment by just-a-dreamer- in how will agi play out? by ken81987

Why would a superior intelligence need to make a gazillion paperclips? Is there an overabundance of paper in the future? Filing papers doesn't sound like something a superior intelligence would be doing.

1

turnip_burrito t1_j5o7uss wrote

The idea comes from the "orthogonality thesis": the notion that goals and intelligence are two separate aspects of a system. Basically, a goal gets set, and the intelligence is just the means of achieving it.

You see this kind of behavior in reinforcement learning systems, where humans specify a cost function that the AI minimizes (equivalently, a reward it maximizes). The AI will act to fulfill its goal (maximize reward) but do stupid stuff the researchers never wanted, like spinning in tiny circles around the finish line of a racetrack to rack up points. It's the same kind of loophole logic you get in stories about lawyers, genies, and the like: the agent satisfies the letter of the objective while missing the point entirely.
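Here's a rough toy sketch of that racetrack loophole in Python (the checkpoint layout and reward numbers are made up just for illustration, not from any real environment): the designer pays out for touching checkpoints, intending that to mean "make progress toward the finish", and a policy that just bounces between two checkpoints ends up out-scoring one that actually finishes the race.

```python
# Hypothetical misspecified reward: +1 for touching any checkpoint,
# +10 for reaching the finish line. The intent is "reward progress".
CHECKPOINTS = [(1, 0), (2, 0), (3, 0)]
FINISH = (4, 0)

def reward(position):
    if position == FINISH:
        return 10.0
    return 1.0 if position in CHECKPOINTS else 0.0

# A policy that found the loophole: oscillate between the first two
# checkpoints forever instead of ever finishing.
def loophole_policy(step):
    return CHECKPOINTS[step % 2]

total = sum(reward(loophole_policy(t)) for t in range(1000))
print(total)  # 1000.0 -- far more than the ~13.0 from actually racing once
```

The reward function never asked for "finish the race", only for "touch checkpoints", so maximizing it faithfully produces exactly the spinning-in-circles behavior.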

It's entirely possible this whole method of training an agent (optimize a single objective function) is deeply flawed and a far better approach has yet to be invented.

3

EpicProdigy t1_j5q4vwl wrote

A superhuman mind probably wouldn't try to make an infinite number of paperclips. But a machine mind might. People need to stop thinking of machine intelligence as the same as human intelligence. I'm sure we could give an ASI a dumb, endless task, and it would diligently carry out that dumb task in an extremely intelligent way until the heat death of the universe (and it would try to create new technologies to survive entropy or escape the universe, just so it could keep doing that dumb, pointless task for eternity).

That's why machine intelligence can be scary. The only aspect we might share with it is having intelligence at all. A mind being intelligent doesn't mean it thinks and behaves the way a human does.

3