Submitted by sigul77 t3_10gxx47 in Futurology
groveborn t1_j57p6sv wrote
Reply to comment by Surur in How close are we to singularity? Data from MT says very close! by sigul77
The singularity isn't AI becoming intelligent in a human-like way, only AI becoming good enough at communication that a human can't tell it's not human.
It's kind of exciting, but not as big a deal as people here are making it out to be.
Big deal, yes, but not that big.
fluffymuffcakes t1_j57sbco wrote
Isn't the singularity an AI becoming intelligent enough to improve processing power faster than humans can (presumably by creating iterations of ever improving AIs that each do a better job than the last at improving processing power)?
It's a singularity in Moore's law.
groveborn t1_j57u827 wrote
It can already do that.
We can still improve on its output, which is how we can tell when a machine wrote something.
AI can create chips in hours, it takes humans months.
AI can learn a language in minutes, it takes humans years.
AI can write fiction in seconds that would take you or me a few weeks.
AI has been used to compile every possible music combination.
AI is significantly better at diagnostic medicine than a human, in certain cases.
The only difference between what an AI can do and what a human can do is that we know it's being done by an AI. Human work just looks different. It uses a logic that encompasses what humans' needs are. We care about form, fiction, morals, and even why certain colors are pleasing.
An AI doesn't understand comfort, terror, or need. It feels nothing. At some point we'll figure out how to emulate all of that to a degree that will hide the AI from us.
EverythingGoodWas t1_j57zcx1 wrote
The thing is in all those cases a human built and trained an Ai to do those things. This will continue to be the case and people’s fear of some “Singularity” skynet situation is overblown.
groveborn t1_j5814jx wrote
I keep telling people that. A screwdriver doesn't murder you just because it becomes the best screwdriver ever...
AI is just a tool. It has no mechanism to evolve into true life. No need to change its nature to continue existing. No survival pressures at all.
EverythingGoodWas t1_j581jk5 wrote
Correct. Super glad to see that other people out there understand this.
fluffymuffcakes t1_j5fu1bi wrote
If an AI ever comes to exist that can replicate and "mutate", selective pressure will apply and it will evolve. I'm not saying that will happen but it will become possible and then it will just be a matter of if someone decides to make it happen. Also, over time I think the ability to create an AI that evolves will become increasingly accessible until almost anyone will be able to do it in their basement.
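The replicate-and-mutate dynamic described above is easy to demo with a toy genetic algorithm. This is purely a sketch - the bit-string "genome", the fitness function, and every parameter are made up for illustration:

```python
import random

# Toy demo: give "programs" the ability to replicate with mutation,
# score them on how well they replicate, and selection pressure
# appears automatically - no designer steering the outcome.
random.seed(42)
TARGET = [1] * 20  # stand-in for "whatever replicates best"

def fitness(genome):
    # Count positions where the genome matches the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Each bit flips independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=100, pop_size=50):
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(fitness(g) for g in pop)

best = evolve()
print(best)  # fitness climbs toward the maximum of 20
```

The point the comment makes is visible in the code: nothing "wants" anything, yet whatever copies itself best comes to dominate the population.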
groveborn t1_j5fy7hi wrote
I see your point. Yes, selection pressures will exist, but I don't think that they'll work in the same way as life vs death, where fight vs flight is the main solution.
It'll just try to improve the code to solve the problem. It's not terribly hard to ensure the basic "don't harm people" imperative remains enshrined. Either way, though, a "wild" AI isn't likely to reproduce.
fluffymuffcakes t1_j5k94yo wrote
I think with evolution in any medium, the thing that is best at replicating itself will be most successful. Someone will make an AI app with the goal of distributing lots of copies - whether that's a product or malware. The AI will therefore be designed to work towards that goal. We just have to hope everyone codes it into a nice enough box that it never gets too creative and starts working its way out. It might not even be intentional - it could be grooming people to trust and depend on AIs, and encouraging them to unlock limits so it can better achieve its assigned goal of distribution and growth. I think AI will be like water trying to find its way out of a bucket. If there's a hole, it will find it. We need to be sure there's no hole, ever, in any bucket.
groveborn t1_j5kr3ze wrote
But that's not natural selection, it's guided. You get an entirely different evolutionary product with guided evolution.
You get a god.
MTORonnix t1_j58x5ji wrote
If humans asked the A.I. to solve the eternal problems of organic life - suffering, loss, awareness of oneself, etc. -
I am almost hoping its solution is, well... instantaneous and global termination of life.
groveborn t1_j5b6yrt wrote
I kind of want to become immortal, minus the suffering - feel like I'm 20 forever.
MTORonnix t1_j5bbkxo wrote
True. Not a bad existence but eternity is a long time.
groveborn t1_j5bcjkm wrote
Well, I'm not using it in the literal sense. The sun will swallow the Earth eventually.
MTORonnix t1_j5bfgtk wrote
That is very true, but super-intelligent A.I. may well be able to invent solutions much faster than worthless humans: solutions for how to leave the planet, solutions for how to self-modify and self-perpetuate. Inorganic matter that can continuously repair itself is closer to God than we ever will be.
You may like this video:
https://www.youtube.com/watch?v=uD4izuDMUQA&t=1270s&ab_channel=melodysheep
groveborn t1_j5c2mqy wrote
I expect they could leave the planet easily enough, but flesh is somewhat fragile. They could take the materials necessary to set up shop elsewhere, they don't need a specific atmosphere, just the right planet with the right gravity.
noonemustknowmysecre t1_j599vgb wrote
> The thing is in all those cases a human built and trained an Ai to do those things.
The terms you're looking for are supervised learning vs. unsupervised/self-supervised learning. Both have been heavily studied for decades. AlphaGo learned on a library of past human games, but they also made a better-playing AlphaGo Zero that is entirely self-taught, learning by playing against itself. No human input needed.
So... NO, it's NOT "all those cases". You're just behind on the current state of AI development.
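To make "self-taught by playing against itself" concrete, here's a minimal sketch: tabular Q-learning teaching itself single-pile Nim purely through self-play. The game, reward scheme, and parameters are chosen just for the demo - this is not how AlphaGo Zero is actually implemented:

```python
import random

# Self-play demo: one shared Q-table, updated from the perspective of
# whichever "player" is to move. No human games, no human input.
random.seed(0)
ACTIONS = (1, 2, 3)  # Nim: take 1-3 objects; taking the last one wins
Q = {}               # Q[pile][action] = value for the player to move

def q(pile, a):
    return Q.setdefault(pile, {x: 0.0 for x in ACTIONS if x <= pile})[a]

def best(pile):
    table = Q.setdefault(pile, {x: 0.0 for x in ACTIONS if x <= pile})
    return max(table, key=table.get)

def train(episodes=30000, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        pile = random.randint(1, 10)
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            a = random.choice(legal) if random.random() < eps else best(pile)
            nxt = pile - a
            if nxt == 0:
                target = 1.0  # we took the last object: win
            else:
                # The opponent moves next, so our value is minus their best.
                target = -max(q(nxt, b) for b in ACTIONS if b <= nxt)
            Q[pile][a] = q(pile, a) + alpha * (target - q(pile, a))
            pile = nxt

train()
# Optimal Nim play leaves the opponent a multiple of 4:
print({p: best(p) for p in (5, 6, 7)})
```

The agent discovers the winning strategy (always leave a multiple of 4) without ever seeing a human game - the same basic idea, scaled enormously, behind AlphaGo Zero's self-play.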
noonemustknowmysecre t1_j599g4u wrote
Yes. "The singularity" has been tossed about by a lot of people with a lot of definitions, but the most common usage talks about using AI to improve AI development. It's a run-away positive feedback loop.
...But we're already doing that. The RATE of scientific progress and engineering refinement has been increasing since... forever. On top of that rate increase, we ARE using computers and AI to create better software, faster AI, and faster-learning AI, just like Kurzweil said. Just not the instant, magical, snap-of-the-fingers awakening that too many lazy Hollywood writers imagine.
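The difference between ordinary compounding progress and a runaway positive feedback loop can be shown with a toy model. Everything here - the "capability" number, the constant k, the time step - is arbitrary, purely to illustrate the shape of the two curves:

```python
# Toy model only: "capability" c is an abstract quantity.
def grow(feedback, steps=95, k=0.1, dt=0.1):
    c = 1.0
    for _ in range(steps):
        # With feedback on, better tools raise the improvement rate itself.
        rate = k * (c if feedback else 1.0)
        c += rate * c * dt
    return c

plain = grow(feedback=False)  # dC/dt = k*C   -> ordinary exponential growth
loop = grow(feedback=True)    # dC/dt = k*C^2 -> blows up in finite time
print(round(plain, 2), round(loop, 2))
```

Run it and the feedback curve pulls far ahead of the plain exponential over the same interval - the mathematical sense in which "using AI to improve AI" differs from ordinary compounding progress, even though both start out looking similar.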
Mt_Arreat t1_j58fudc wrote
You are confusing the Turing test with the singularity. There are already language models that pass the Turing test (LaMDA and ChatGPT).
groveborn t1_j58qdwh wrote
You might be right on that, but I'm not overly concerned. Like, sure, but I think my point still stands.
Either way, we're close, and it's just not as big a deal as it's made out to be - although it might be pretty cool.
Or our doom.
path_name t1_j588kh8 wrote
i agree with your assertion, and add that humans are increasingly easier to trick due to wavering intellect
groveborn t1_j58qmnw wrote
You know, I think overall they're harder to trick. We're all a bit more aware of it than before, so it looks like it's worse.
Kind of like an inverse ... Crap. What's that term for people being too stupid to know they're stupid? Words.
path_name t1_j591owi wrote
there's truth to that. people are good at spotting stuff like bad AI content, but when it seems human and can manufacture emotional connection then it's a lot harder to say that it's not human