
4e_65_6f t1_itmg5pl wrote

Yeah, I was thinking about this the other day. You don't have to know what multiplication means if you know all possible outcomes by memory. It's kind of a primitive approach, but usage-wise it would be indistinguishable from multiplication. I think the same may apply to many other concepts.
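For a concrete toy version of that idea (names and scale made up for illustration, shrunk to single-digit operands):

```python
# Toy sketch: "multiplication" as pure recall. The table enumerates all
# 100 single-digit outcomes up front, so no arithmetic happens at call time.
TABLE = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply(a, b):
    """Look up a*b from memory instead of computing it."""
    return TABLE[(a, b)]
```

From the outside, `multiply(7, 8)` behaves exactly like real multiplication, even though nothing in the function "understands" what a product is.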

4

ReadSeparate t1_itqkpoj wrote

When GPT-3 first came out, I had a similar realization about how this all works.

Rather than thinking in binary terms of “is this intelligence or not,” it’s much better to think in terms of accuracy and the probability of giving correct outputs.

Imagine you had a gigantic non-ML computer program with billions or trillions of IF/THEN statements, no neural networks involved, just IF/THEN rules in, say, C++, and its output matched what a real human would do/say/think 99.9% of the time. A lot of people would say that this mind isn’t a mind at all, and that it’s not “real intelligence”, but are you still going to feel that way when it steals your job? When it gets elected to office?
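A toy sketch of what I mean (the rules here are invented for illustration, scaled down from trillions of branches to a handful):

```python
# A purely rule-based "mind": nothing but hand-written IF/THEN branches,
# no learning involved. The real thought experiment would need billions
# of these to approximate a human.
def respond(prompt):
    p = prompt.lower()
    if "hello" in p:
        return "Hi there!"
    if "weather" in p:
        return "Looks clear today."
    if "can you do my job" in p:
        return "Probably, yes."
    return "Tell me more."
```

Whether this counts as "real intelligence" is exactly the question; behaviorally, at sufficient scale and accuracy, you couldn't tell.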

Behavioral outputs ARE all that matters. Who cares whether a self-driving car “really understands driving” if it’s safer and faster than a human driver?

It’s just a question of: how accurate are these models at approximating human behavior? Once a model gets past the point of any one of us being able to tell the difference, it has earned the badge of intelligence in my mind.

3

4e_65_6f t1_itqt6hl wrote

>Behavioral outputs ARE all that matters. Who cares whether a self-driving car “really understands driving” if it’s safer and faster than a human driver?
>
>It’s just a question of: how accurate are these models at approximating human behavior? Once a model gets past the point of any one of us being able to tell the difference, it has earned the badge of intelligence in my mind.

I think the intelligence itself comes from whoever produced the data the AI was trained on, whatever that may be. It doesn't have to be actually intelligent on its own; it only has to learn to mimic the intelligent process behind the data.

In other words it only has to know "what" not "how".

In terms of utility I don't think there's any difference either; people seem to be more concerned with the moral implications.

For instance, I wouldn't be concerned about a robot that is programmed to fake feeling pain. But I would be concerned about a robot that actually does.

The problem is: how the hell could we tell the difference? Especially if it improved on its own and we don't understand exactly how. It will tell you that it does feel pain, and it would seem genuine, but if it was like GPT-3 that would be a lie.

And since we're dealing with billions of parameters now, it becomes a next-to-impossible task to distinguish between the two.

2

ReadSeparate t1_ittgzjh wrote

I've never really cared too much about the moral issues involved here, to be honest. People always talk about sentience, sapience, consciousness, and the capacity to suffer, and that's all cool stuff for sure, and it does matter. But what I think is far more pressing is: can this model replace a lot of people's jobs, and can it surpass the entire collective intelligence of the human race?

Like, if we did create a model and it did suffer a lot, that would be a tragedy. But it would be a much bigger tragedy if we built a model that wiped out the human race, or if we built superintelligence and didn't use it to cure cancer or end war or poverty.

I feel like the cognitive capacity of these models is the #1 concern by a factor of 100. The other things matter too, and it might turn out that we'll be seen as monsters in the future for enslaving machines or something; that's certainly possible. But I just want humanity to evolve to the next level.

I do agree, though: it's probably going to be extremely difficult, if not impossible, to get an objective view of the subjective experience of a mind like this, unless we can directly inspect it somehow rather than just asking it how it feels.

1