
fluffymuffcakes t1_j57sbco wrote

Isn't the singularity an AI becoming intelligent enough to improve processing power faster than humans can (presumably by creating iterations of ever improving AIs that each do a better job than the last at improving processing power)?

It's a singularity in Moore's law.

8

groveborn t1_j57u827 wrote

It can already do that.

We can still improve on its output, which is how we can tell when a machine wrote it.

AI can design chips in hours; it takes humans months.

AI can learn a language in minutes; it takes humans years.

AI can write fiction in seconds that would take you or me a few weeks.

AI has been used to generate every possible melody combination.

AI is significantly better at diagnostic medicine than a human, in certain cases.

The only difference between what an AI can do and what a human does is that we know it's being done by an AI. Human work just looks different. It uses a logic that encompasses what humans' needs are. We care about form, fiction, morals, and even why certain colors are pleasing.

An AI doesn't understand comfort, terror, or need. It feels nothing. At some point we'll figure out how to emulate all of that well enough to hide the AI from us.

6

EverythingGoodWas t1_j57zcx1 wrote

The thing is, in all those cases a human built and trained an AI to do those things. This will continue to be the case, and people's fear of some "Singularity" Skynet situation is overblown.

2

groveborn t1_j5814jx wrote

I keep telling people that. A screwdriver doesn't murder you just because it becomes the best screwdriver ever...

AI is just a tool. It has no mechanism to evolve into true life. No need to change its nature to continue existing. No survival pressures at all.

9

fluffymuffcakes t1_j5fu1bi wrote

If an AI ever comes to exist that can replicate and "mutate", selection pressure will apply and it will evolve. I'm not saying that will happen, but it will become possible, and then it's just a matter of whether someone decides to make it happen. Also, over time I think the ability to create an AI that evolves will become increasingly accessible, until almost anyone will be able to do it in their basement.

1

groveborn t1_j5fy7hi wrote

I see your point. Yes, selection pressures will exist, but I don't think that they'll work in the same way as life vs death, where fight vs flight is the main solution.

It'll just try to improve the code to solve the problem. It's not terribly hard to ensure the basic "don't harm people" imperative remains enshrined. Either way, though, a "wild" AI isn't likely to reproduce.

1

fluffymuffcakes t1_j5k94yo wrote

I think with evolution in any medium, the thing that is best at replicating itself will be most successful. Someone will make an AI app with the goal of distributing lots of copies - whether that's a product or malware. The AI will therefore be designed to work towards that goal. We just have to hope that everyone codes it into a tight enough box that it never gets too creative and starts working its way out. It might not even be intentional. It could be grooming people to trust and depend on AIs, encouraging them to unlock limits so it can better achieve its assigned goal of distribution and growth. I think AI will be like water trying to find its way out of a bucket. If there's a hole, it will find it. We need to be sure there's no hole, ever, in any bucket.
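That selection dynamic doesn't need anything exotic. Here's a hedged toy sketch (pure illustration - the agents, rates, and constants are all made up) of fitness-proportional selection plus mutation pushing replication rates upward, exactly the "best replicator wins" loop described above:

```python
import random

def evolve(pop_size=10, generations=200, sigma=0.05, seed=0):
    """Toy replicator model: each agent has a 'replication rate'.
    Selection samples parents in proportion to that rate; each copy
    mutates slightly. Returns the population's mean rate."""
    rng = random.Random(seed)
    population = [1.0] * pop_size  # everyone starts equally good at copying
    for _ in range(generations):
        # selection: faster replicators leave more copies
        population = rng.choices(population, weights=population, k=pop_size)
        # mutation: each copy is an imperfect replica of its parent
        population = [max(0.01, r + rng.gauss(0, sigma)) for r in population]
    return sum(population) / pop_size

print(evolve())  # mean replication rate drifts well above the starting 1.0
```

No one "designs" the improvement; the sampling bias alone does it, which is the point being made about copies of an AI app.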

1

groveborn t1_j5kr3ze wrote

But that's not natural selection, it's guided. You get an entirely different evolutionary product with guided evolution.

You get a god.

1

MTORonnix t1_j58x5ji wrote

If humans asked the A.I. to solve the eternal problems of organic life - suffering, loss, awareness of oneself, etc.

I am almost hoping its solution is, well... instantaneous and global termination of life.

0

groveborn t1_j5b6yrt wrote

I kind of want to become immortal, minus the suffering, and feel like I'm 20 forever.

1

MTORonnix t1_j5bbkxo wrote

True. Not a bad existence but eternity is a long time.

1

groveborn t1_j5bcjkm wrote

Well, I'm not using it in the literal sense. The sun will swallow the Earth eventually.

1

MTORonnix t1_j5bfgtk wrote

That is very true, but super-intelligent a.i. may very well be able to invent solutions much faster than worthless humans. Solutions for how to leave the planet. Solutions for how to self-modify and self-perpetuate. Inorganic matter that can continuously repair itself is closer to God than we ever will be.

you may like this video:
https://www.youtube.com/watch?v=uD4izuDMUQA&t=1270s&ab_channel=melodysheep

0

groveborn t1_j5c2mqy wrote

I expect they could leave the planet easily enough, but flesh is somewhat fragile. They could take the materials necessary to set up shop elsewhere; they don't need a specific atmosphere, just the right planet with the right gravity.

1

noonemustknowmysecre t1_j599vgb wrote

> The thing is in all those cases a human built and trained an Ai to do those things.

The terms you're looking for are supervised learning vs. unsupervised/self-supervised learning. Both have been heavily studied for decades. AlphaGo learned from a library of past games, but they also made a stronger version, AlphaGo Zero, which is entirely self-taught by playing against itself. No human input needed.

So... NO, it's NOT "all those cases". You're just behind on the current state of AI development.
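The self-play idea can be shown in miniature. This is a toy tabular learner for one-pile Nim (take 1-3 stones, taking the last stone wins) - nothing like AlphaGo Zero's actual architecture, just an illustration that a decent policy can emerge from zero human examples:

```python
import random

def self_play_nim(episodes=20000, pile=10, seed=1):
    """Learn one-pile Nim purely by self-play: both 'players' share one
    value table Q[(stones, move)] and improve it from game outcomes."""
    rng = random.Random(seed)
    Q = {}  # estimated value of taking `move` stones when `stones` remain
    for _ in range(episodes):
        stones, history = pile, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < 0.2:  # explore a random move
                m = rng.choice(moves)
            else:                   # otherwise exploit the current table
                m = max(moves, key=lambda a: Q.get((stones, a), 0.0))
            history.append((stones, m))
            stones -= m
        # the player who took the last stone won: +1 for their moves,
        # -1 for the opponent's, alternating backwards through the game
        reward = 1.0
        for state_action in reversed(history):
            old = Q.get(state_action, 0.0)
            Q[state_action] = old + 0.1 * (reward - old)
            reward = -reward
    return Q

Q = self_play_nim()
# e.g. with 3 stones left, the learned table should strongly favor taking 3
print(max((1, 2, 3), key=lambda a: Q.get((3, a), 0.0)))
```

The training signal comes only from games the system plays against itself, which is the supervised-vs-self-taught distinction above in about thirty lines.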

−1

noonemustknowmysecre t1_j599g4u wrote

Yes. "The singularity" has been tossed about by a lot of people with a lot of definitions, but the most common usage talks about using AI to improve AI development. It's a run-away positive feedback loop.

...But we're already doing that. The RATE of scientific progress and engineering refinement has been increasing since... forever. On top of that rate increase, we ARE using computers and AI to create better software, faster AI, and faster-learning AI, just like Kurzweil said. Just not the instant, magical snap-of-the-fingers awakening that too many lazy Hollywood writers imagine.
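For what it's worth, the difference between ordinary acceleration and a true "singularity" is easy to make precise. Exponential growth (dx/dt = kx) never diverges, but if capability feeds back into its own growth rate (dx/dt = kx^2), the solution x(t) = x0 / (1 - k*x0*t) hits infinity at the finite time t = 1/(k*x0). A toy numeric sketch (all constants illustrative):

```python
def blowup_time(x0=1.0, k=0.1, dt=1e-4, horizon=100.0):
    """Euler-integrate dx/dt = k*x**2 (capability improving its own
    improvement rate) and report when x exceeds a huge threshold."""
    x, t = x0, 0.0
    while t < horizon:
        x += k * x * x * dt
        t += dt
        if x > 1e12:
            return t
    return None  # never blew up within the time horizon

print(blowup_time())  # analytic singularity is at 1/(k*x0) = 10.0
```

The feedback loop is what distinguishes "things keep getting faster" from "the model predicts infinity at a specific date", which is the sense in which Kurzweil-style claims use the word.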

1