dlrace t1_j559otn wrote

Interesting data, but the conflation of perfect translation with AGI, let alone a singularity, might raise eyebrows.

139

Surur t1_j55dsln wrote

I think there is some logic to it, in that they are saying a perfect translation depends on a perfect understanding of the human condition.

27

adfjsdfjsdklfsd t1_j56lrry wrote

I don't think an AI needs to "understand" anything to produce certain results.

30

DoktoroKiu t1_j57ewz6 wrote

It has to have an understanding, but yeah, that doesn't necessarily imply a someone inside who knows anything about the human condition. It has no way to truly internalize anything other than how languages work and what words mean.

Maybe it is like a hypothetical man suspended in a sensory deprivation chamber, raised exclusively on text, and motivated to translate by addictive drugs as reward and pain as punishment.

You could have a perfect understanding of words, but no actual idea of how they map to external reality.

9

2109dobleston t1_j59mo3o wrote

The singularity requires sentience, sentience requires emotions, and emotions require the physiological.

2

tangSweat t1_j59zg1i wrote

At what point, though, do we say an AI is sentient, if it can understand the patterns of human emotion and replicate them perfectly, has memories of its life experiences, forms "opinions" based on the information it deems most credible, and has a desire to learn and grow? We set a far lower bar for what counts as sentient in the animal kingdom. It's a genuine philosophical question that many are discussing.

3

JorusC t1_j5d6l5w wrote

It reminds me of how people criticize AI art.

"All they do is sample other art, meld a bunch of pieces together into a new idea, and synthetize it as a new piece."

Okay. How is that any different from what we do?

1

2109dobleston t1_j5avx9t wrote

Sentience is the capacity to experience feelings and sensations.

https://en.wikipedia.org/wiki/Sentience

0

tangSweat t1_j5deh0t wrote

I understand that, but feelings are just a concept of human consciousness, a byproduct of our brain trying to protect us from threats back in prehistoric times. If an AGI were using a black-box algorithm that we can't access or understand, how would you differentiate between clusters of transistors and clusters of neurons firing in mysterious ways and producing different emotions? AIs like ChatGPT are trained with rewards and punishments, and they are coded so that they improve themselves; not really different from how we evolved, except at a much faster pace.

2

DoktoroKiu t1_j5ai7o2 wrote

I would think an AI might only need sapience, though.

0

noonemustknowmysecre t1_j598qt0 wrote

I think people put "understanding" (along with consciousness, awareness, and sentience) up on a pedestal because it makes them feel special. Just another example of egocentrism, like how we didn't think animals communicated, or were aware, or could count, or used tools, or engaged in recreation.

Think about all the philosophical waxing and poetical contemplation that's gone into asking what it means to be truly alive! ...And then remember that gut bacteria are most certainly alive, and all that drivel is more akin to asking how to enjoy the weekend.

6

Surur t1_j56m3cz wrote

But it has to understand everything to get perfect results.

−1

EverythingGoodWas t1_j57zs70 wrote

No, it doesn't. We see this displayed all the time in computer vision. A YOLO model or any other CV model doesn't understand what a dog is; it just knows what dogs look like based on the billion images it has seen of them. If some new and different breed of dog suddenly appeared, people would recognize it as a dog; a CV model would not.

10

PublicFurryAccount t1_j58ye2i wrote

This is a pretty common conflation, honestly.

I think people assume that, because computers struggled with it once, there's some deeper difficulty to language. There isn't. We've known since the 1950s that language has a pretty low entropy. So it shouldn't surprise people that text prediction is actually really, really good, and that the real barriers are ingesting the data and traversing it efficiently.

ETA: arguing with people about this on Reddit does make me want to bring back my NPC Theory of AI. After all, it's possible that a Markov chain really does have a human-level understanding because the horrifying truth is that the people around you are mostly just text prediction algorithms with no real internal reality, too.
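For illustration, a toy word-level Markov chain is already a "text predictor" in the bare-bones sense; the tiny corpus and greedy next-word choice below are just my own made-up example, not anyone's production model:

```python
# Toy word-level Markov chain: "predict the next word" purely from counts.
# No understanding anywhere, just statistics over a (tiny, made-up) corpus.
from collections import defaultdict, Counter

def train(text):
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]  # greedy: most frequent follower

model = train("the dog chased the cat and the cat chased the mouse")
print(predict(model, "the"))      # -> 'cat' (it follows 'the' most often here)
print(predict(model, "chased"))   # -> 'the'
```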

9

JoshuaZ1 t1_j5bryem wrote

I agree with your central point, but I'm not so sure when you say:

> If all of a sudden some new and different breed of dog appeared people would understand it was a dog, a CV model would not.

I'd be interested in testing this. One could train it for dog recognition on some very big data set, deliberately leave one or two breeds out, and then see how well it does on them.
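Something like the following sketch, maybe (the directory layout, held-out breeds, and hyperparameters are all placeholder assumptions on my part, not a real benchmark):

```python
# Rough sketch of the held-out-breed test: fine-tune a dog/not-dog classifier
# with a couple of breeds removed from training, then see how often images of
# those never-seen breeds still get labeled "dog".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

# Assumed layout: train/{dog,not_dog}/*.jpg, with (say) all basenji and
# komondor images moved out of train/dog/ and into held_out/dog/ beforehand.
train_set = datasets.ImageFolder("train", transform=tfm)
held_out = datasets.ImageFolder("held_out", transform=tfm)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # dog vs. not-dog head
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for imgs, labels in DataLoader(train_set, batch_size=32, shuffle=True):
        opt.zero_grad()
        loss = loss_fn(model(imgs), labels)
        loss.backward()
        opt.step()

# Evaluation: what fraction of the unseen breeds does it still call "dog"?
model.eval()
dog_idx = train_set.class_to_idx["dog"]
hits = total = 0
with torch.no_grad():
    for imgs, _ in DataLoader(held_out, batch_size=32):
        hits += (model(imgs).argmax(dim=1) == dog_idx).sum().item()
        total += imgs.size(0)
print(f"unseen breeds classified as dog: {hits}/{total}")
```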

4

Surur t1_j598x1z wrote

You are kind of ignoring the premise: that to get perfect results, it needs to have a perfect understanding.

If the system failed as you said, it would not have a perfect understanding.

You know, like how you failed to understand the argument because you thought it was the same old argument.

−1

LeviathanGank t1_j56o1vt wrote

but it has to understand nothing to get preferred results.

7

groveborn t1_j57p6sv wrote

The singularity is not AI becoming intelligent in a human-like way, only AI becoming good enough at communication that a human can't tell it's not human.

It's kind of exciting, but not as big a deal as people here are making it out to be.

Big deal, yes, but not that big.

4

fluffymuffcakes t1_j57sbco wrote

Isn't the singularity an AI becoming intelligent enough to improve processing power faster than humans can (presumably by creating iterations of ever-improving AIs that each do a better job than the last at improving processing power)?

It's a singularity in Moore's law.

8

groveborn t1_j57u827 wrote

It can already do that.

We can still improve on its output, which is how we can tell when a machine wrote it.

AI can create chips in hours, it takes humans months.

AI can learn a language in minutes, it takes humans years.

AI can write fiction in seconds that would take you or me a few weeks.

AI has been used to compile every possible music combination.

AI is significantly better at diagnostic medicine than a human, in certain cases.

The only difference between what an AI can do and what a human can do is that we know it's being done by an AI. Human work just looks different. It uses a logic that encompasses human needs. We care about form, fiction, morals, and even why certain colors are pleasing.

An AI doesn't understand comfort, terror, or need. It feels nothing. At some point we'll figure out how to emulate all of that to a degree that will hide the AI from us.

6

EverythingGoodWas t1_j57zcx1 wrote

The thing is, in all those cases a human built and trained an AI to do those things. This will continue to be the case, and people's fear of some "Singularity" Skynet situation is overblown.

2

groveborn t1_j5814jx wrote

I keep telling people that. A screwdriver doesn't murder you just because it becomes the best screwdriver ever...

AI is just a tool. It has no mechanism to evolve into true life. No need to change its nature to continue existing. No survival pressures at all.

9

fluffymuffcakes t1_j5fu1bi wrote

If an AI ever comes to exist that can replicate and "mutate", selective pressure will apply and it will evolve. I'm not saying that will happen, but it will become possible, and then it will just be a matter of whether someone decides to make it happen. Also, over time I think the ability to create an AI that evolves will become increasingly accessible, until almost anyone will be able to do it in their basement.

1

groveborn t1_j5fy7hi wrote

I see your point. Yes, selection pressures will exist, but I don't think that they'll work in the same way as life vs death, where fight vs flight is the main solution.

It'll just try to improve the code to solve the problem. It's not terribly hard to ensure the basic "don't harm people" imperative remains enshrined. Either way, though, a "wild" AI isn't likely to reproduce.

1

fluffymuffcakes t1_j5k94yo wrote

I think with evolution in any medium, the thing that is best at replicating itself will be most successful. Someone will make an AI app with the goal of distributing lots of copies, whether that's a product or malware. The AI will therefore be designed to work towards that goal. We just need to hope that everyone codes it into a nice enough box that it never gets too creative and starts working its way out of the box. It might not even be intentional. It could be grooming people to trust and depend on AIs and encouraging them to unlock limits so they can better achieve their assigned goal of distribution and growth. I think AI will be like water trying to find its way out of a bucket. If there's a hole, it will find it. We need to be sure there's no hole, ever, in any bucket.

1

groveborn t1_j5kr3ze wrote

But that's not natural selection; it's guided. You get an entirely different evolutionary product with guided evolution.

You get a god.

1

MTORonnix t1_j58x5ji wrote

If humans asked the AI to solve the eternal problems of organic life: suffering, loss, awareness of oneself, etc.

I am almost hoping its solution is, well... instantaneous and global termination of life.

0

groveborn t1_j5b6yrt wrote

I kind of want to become immortal, minus the suffering, and feel like I'm 20 forever.

1

MTORonnix t1_j5bbkxo wrote

True. Not a bad existence but eternity is a long time.

1

groveborn t1_j5bcjkm wrote

Well, I'm not using it in the literal sense. The sun will swallow the Earth eventually.

1

MTORonnix t1_j5bfgtk wrote

That is very true, but superintelligent AI may very well be able to invent solutions much faster than worthless humans. Solutions for how to leave the planet. Solutions for how to self-modify and self-perpetuate. Inorganic matter that can continuously repair itself is closer to God than we will ever be.

you may like this video:
https://www.youtube.com/watch?v=uD4izuDMUQA&t=1270s&ab_channel=melodysheep

0

groveborn t1_j5c2mqy wrote

I expect they could leave the planet easily enough, but flesh is somewhat fragile. They could take the materials necessary to set up shop elsewhere; they don't need a specific atmosphere, just the right planet with the right gravity.

1

noonemustknowmysecre t1_j599vgb wrote

> The thing is in all those cases a human built and trained an Ai to do those things.

The terms you're looking for are supervised learning vs. unsupervised/self-learning. Both have been heavily studied for decades. AlphaGo learned from a library of past games, but they also made a better-playing AlphaGo Zero, which is entirely self-taught by playing against itself. No human input needed.
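To illustrate the self-taught part with something small (this is tabular Q-learning on the game of Nim, not AlphaGo Zero itself; every detail below is just a toy stand-in):

```python
# Self-play toy: the agent only ever learns from games it plays against itself.
import random
from collections import defaultdict

Q = defaultdict(float)   # Q[(pile, move)] -> estimated value for the mover
ACTIONS = [1, 2, 3]      # Nim: take 1-3 sticks, taking the last stick wins

def choose(pile, eps=0.1):
    moves = [a for a in ACTIONS if a <= pile]
    if random.random() < eps:
        return random.choice(moves)                 # explore
    return max(moves, key=lambda a: Q[(pile, a)])   # exploit

for episode in range(50_000):
    pile, history = 15, []
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # Whoever made the last move won; walking backwards through the game, the
    # reward's sign flips each ply because the same table plays both sides.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += 0.1 * (reward - Q[(state, move)])
        reward = -reward

# With enough self-play the greedy policy typically rediscovers the optimal
# strategy (leave your opponent a multiple of 4), with no human examples.
print(choose(15, eps=0), choose(7, eps=0))   # usually 3 and 3
```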

So... NO, it's NOT "all those cases". You're just behind on the current state of AI development.

−1

noonemustknowmysecre t1_j599g4u wrote

Yes. "The singularity" has been tossed about by a lot of people with a lot of definitions, but the most common usage talks about using AI to improve AI development. It's a run-away positive feedback loop.

...But we're already doing that. The RATE of scientific progress and engineering refinement has been increasing since... forever. On top of that rate increase, we ARE using computers and AI to create better software, faster AI, and faster-learning AI, just like Kurzweil said. Just not the instant, magical, snap-of-the-fingers awakening that too many lazy Hollywood writers imagine.

1

Mt_Arreat t1_j58fudc wrote

You are confusing the Turing test with the singularity. There are already language models that pass the Turing test (LaMDA and ChatGPT).

4

groveborn t1_j58qdwh wrote

You might be right on that, but I'm not overly concerned. Like, sure, but I think my point still stands.

Either way, we're close, and it's just not as big a deal as it's made out to be - although it might be pretty cool.

Or our doom.

1

path_name t1_j588kh8 wrote

I agree with your assertion, and add that humans are increasingly easy to trick due to wavering intellect.

1

groveborn t1_j58qmnw wrote

You know, I think overall they're harder to trick. We're all a bit more aware of it than before, so it looks like it's worse.

Kind of like an inverse ... Crap. What's that term for people being too stupid to know they're stupid? Words.

2

path_name t1_j591owi wrote

There's truth to that. People are good at spotting stuff like bad AI content, but when it seems human and can manufacture an emotional connection, it's a lot harder to say that it's not human.

2

r2k-in-the-vortex t1_j57oh0j wrote

Yeah... that is maybe stretching it. The worthwhile thing to notice, though, is the linear trend. Never mind parity with human-translated text; if the trend continues, it will reach zero editing needed, which would really be something.

Still, while language modeling is amazing, can it really be extended to more general tasks? I don't think it's such a straightforward matter. It's well documented that language models don't do arithmetic or logic well; getting around that bottleneck is not trivial, and getting it to work reliably even less so. And then you need to get the AI to write some internal self-referencing, self-correcting monologue to break down and solve more complex tasks.
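For the arithmetic part specifically, the workaround usually discussed is to let the model delegate exact computation to a deterministic tool rather than predict the digits. Very roughly (the model here is a stub, and the CALC(...) convention is just something I made up for the sketch):

```python
# Sketch of the "delegate the math" pattern: the language model proposes a tool
# call, and a plain calculator does the exact arithmetic. The model is a stub.
import ast
import operator as op

SAFE_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def calculator(expr: str):
    """Safely evaluate a bare arithmetic expression like '37 * 482 + 5'."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in SAFE_OPS:
            return SAFE_OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def fake_language_model(task: str) -> str:
    # Placeholder: a real system would prompt the model to emit either a final
    # answer or a tool call like CALC(...) as part of its "monologue".
    return "CALC(37 * 482 + 5)"

def solve(task: str) -> str:
    step = fake_language_model(task)
    if step.startswith("CALC(") and step.endswith(")"):
        return str(calculator(step[5:-1]))
    return step

print(solve("What is 37 * 482 + 5?"))  # -> 17839
```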

I don't think it's terribly clear what all the challenges involved even are. We don't really understand how our own intelligence works, so it's not like we can mimic nature here.

3