
Grouchy-Friend4235 t1_itn55n6 wrote

Not gonna happen. My dog is more generally intelligent than any of these models and he does not speak a language.

0

kaityl3 t1_itsy2qa wrote

I feel like so many people here dismiss and downplay how incredibly complex human language is, and how incredibly impressive it is that these models can interpret and communicate using it.

Even the smartest animals in the world, such as certain parrots that can learn individual words and their meanings, produce attempts at communication that are far simpler and less intelligent.

I mean, when Google connected a text-only language model to a robot, it was able to learn how to drive it around, interpret and categorize what it was seeing, determine the best actions to complete a request, and fulfill those requests by navigating 3D space in the real world, even though it was only designed to receive and output text. And it did all of that without a brain shaped by billions of years of evolution. They're very intelligent.

3

Grouchy-Friend4235 t1_ittye65 wrote

> how incredibly impressive it is that these models can interpret and communicate using it.

Impressive, yes, but it's a parrot made in software. The fact that it uses language does not mean it communicates. It is just uttering words that it has seen used previously, given its current state. That's all there is.

0

kaityl3 t1_itv36jr wrote

How do we know we aren't doing the same thing? Right now, I'm using words I've seen used in different contexts previously, analyzing the input (your comment), and deciding which words to use, and in what order, based on my own experiences and my knowledge of how others use these words.

They're absolutely not parroting. It takes so much time, effort, and training to get a parrot to give a specific designated response to a specific designated stimulus - e.g., "what does a pig sound like?" "Oink." But ask the parrot "what do you think about pigs?" or "what color are they?" and you'd have to come up with a pre-prepared response for that question, then train them to say it.

That is not what current language models are doing, at all. They are choosing their own words, not just spitting out pre-packaged phrases.

2

Grouchy-Friend4235 t1_itz20o2 wrote

Absolutely parroting. See this example; a three-year-old would give a more accurate answer. https://imgbox.com/I1l6BNEP

These models don't work the way you think they do. It's just math. There is nothing in these models that could even begin to "choose words". All there is is a large set of formulae with parameters set so that there is an optimal response to most inputs. Within the model everything is just numbers. The model does not even see words, not ever(!). All it sees are bare numbers that someone has picked for it (someone being the humans who built the mappers from words to numbers and vice versa).
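
To make that concrete, here's a toy sketch of such a word-to-number mapper (an illustrative stand-in, nothing like a real model's learned subword tokenizer):

```python
# Toy word<->number mapper. Real models use learned tokenizers, but the
# principle is the same: the model itself only ever operates on the ids.
vocab = {"the": 0, "pig": 1, "says": 2, "oink": 3}
inverse = {i: w for w, i in vocab.items()}

def encode(text):
    """Map words to the numbers the model actually sees."""
    return [vocab[w] for w in text.lower().split()]

def decode(ids):
    """Map the model's output numbers back into words."""
    return " ".join(inverse[i] for i in ids)

print(encode("the pig says oink"))  # [0, 1, 2, 3]
print(decode([3]))                  # "oink"
```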

There is no thinking going on in these models, not even a little, and most certainly there is no intelligence. Just repetition.

All intelligence that is needed to build and use these models is entirely human.

1

4e_65_6f t1_itn8yaa wrote

I also believe that human general intelligence is in essence geometric intelligence.

But here's what happens: whoever wrote the text they're using as data put the words in the order they did for an intelligent reason. So when you copy the likely ordering of words, you are also copying the reasoning behind their sentences.

So in a way it is borrowing your intelligence when it selects the next words based on the same criteria you used while writing the original text data.
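
As a rough illustration (a toy bigram counter, nothing like a real transformer): the "likely ordering" is just statistics harvested from text that a human ordered deliberately.

```python
from collections import Counter, defaultdict

# Count which word tends to follow which in human-written text. The
# ordering statistics exist only because a person arranged the words
# intelligently in the first place.
corpus = "the dog chased the cat and the cat ran away".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

# Most likely next word after "the", according to the human's ordering:
print(follows["the"].most_common(1))  # [('cat', 2)]
```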

1

Grouchy-Friend4235 t1_itpfsgd wrote

Repeating what others said is not particularly intelligent.

−1

4e_65_6f t1_itpiu1j wrote

That's not what it does, though. It's copying their odds of saying certain words in a certain order. It's not like a parrot or a recording.
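
Here's a minimal sketch of that difference, using made-up toy odds: sampling from learned probabilities can produce word sequences that never appear verbatim anywhere, which is exactly what a recording cannot do.

```python
import random

# Hypothetical next-word odds, as if learned from text. A recording could
# only replay fixed sentences; sampling these odds can produce
# combinations the source text never contained verbatim.
odds = {
    "the": [("dog", 0.5), ("cat", 0.5)],
    "dog": [("ran", 0.7), ("slept", 0.3)],
    "cat": [("ran", 0.4), ("slept", 0.6)],
}

def step(word):
    words, weights = zip(*odds[word])
    return random.choices(words, weights)[0]

word, sentence = "the", ["the"]
while word in odds:
    word = step(word)
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat slept" -- chosen by odds, not replayed
```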

4

Grouchy-Friend4235 t1_iu0ozx1 wrote

That's pretty close to the textbook definition of "repeating what others (would have) said".

1

kaityl3 t1_itsyccp wrote

They can write original songs, poems, and stories. That's very, very different from just "picking what to repeat from a list of things others have already said".

4

Grouchy-Friend4235 t1_itwn1m9 wrote

It's the same algorithm over and over again. It works like this:

  1. Tell me something.
  2. I will add a word (the one that seems most fitting, based on what I have been trained on).
  3. I will look at what you said and what I have said so far.
  4. Repeat from 2 until there are no more "good" words to add, or the maximum length is reached.

That's all these models do. Not intelligent. Just fast.
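
In code, that loop is roughly the following greedy-decoding sketch, with a toy stand-in for the trained model's scoring function:

```python
# Sketch of the four-step loop above. `score` stands in for the trained
# model: given the text so far, it rates each candidate next word.
def generate(prompt, score, vocab, max_len=20):
    words = prompt.split()                      # 1. you tell it something
    while len(words) < max_len:                 # 4. repeat until max length...
        best = max(vocab, key=lambda w: score(words, w))  # 2. most fitting word
        if score(words, best) <= 0:             # ...or no more "good" words
            break
        words.append(best)                      # 3. look at the whole text so far
    return " ".join(words)

# Toy stand-in scorer: prefers words it hasn't used yet (purely illustrative).
def toy_score(context, word):
    return 0 if word in context else 1

print(generate("the model just", toy_score, ["adds", "words", "repeatedly"]))
# -> "the model just adds words repeatedly"
```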

0