jms4607 t1_j09860v wrote

You could argue an LLM trained with RL, like ChatGPT, has intent in that it is aware it is acting in an MDP and needs to take purposeful action.

0

ReginaldIII t1_j0b9rwb wrote

RL is being used to apply weight updates during fine-tuning. The resulting LLM is still just a static LLM with the same architecture.

It has no intent and no awareness. It is just a model, being shown some prior and asked to sample the next token.

It is just an LLM. The fine-tuning method just produces an LLM that looks high quality on the specific task of conversationally structured inputs and outputs.
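To put that concretely, here is a toy sketch (plain numpy, with a made-up five-word vocabulary and a random stand-in for the network) of everything the deployed model is asked to do at inference time: map a context to logits and sample the next token. Fine-tuning only changes the numbers inside that forward pass; this loop stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]  # made-up tiny vocabulary

def forward(context_ids):
    # Stand-in for the LLM forward pass; any RL-fine-tuned weights live in here.
    # All it does is map a context to one logit per vocabulary entry.
    return rng.normal(size=len(vocab))

def sample_next_token(context_ids, temperature=1.0):
    logits = forward(context_ids) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(len(vocab), p=probs))

context = [0, 1]  # "the cat"
print(vocab[sample_next_token(context)])
```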

You would never take your linear regression model that happens to fit the data perfectly, give it a new prior of some X value, see that it produces a sensible Y value, and conclude, "Look, my linear regression is really aware of the problem domain!"

Nope. Your linear regression model fit the data well, and you were able to sample something from it that was on the manifold the training data also lived on. That's all that's going on. Just in higher dimensions.
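The analogy in code (scikit-learn, with made-up 1-D data): fit a line, show it a new x it has never seen, and get a perfectly sensible y back. Nothing about that round trip implies the model is aware of anything.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(100, 1))              # made-up training inputs
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.5, 100)  # noisy line: y = 3x + 2

reg = LinearRegression().fit(X, y)

x_new = np.array([[7.3]])   # a "prior" it has never seen
print(reg.predict(x_new))   # a sensible y on the training manifold; no awareness required
```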

4

NotDoingResearch2 t1_j0d1dgb wrote

I feel like our modern-day education system has somehow made us unable to tell the difference between models and reality.

2

Hyper1on t1_j0e7dv0 wrote

Look at Algorithm Distillation: you can clearly do RL in-context with LLMs. The point of this discussion is that "being asked to sample the next token" can, if sufficiently optimized, encompass a wide variety of behaviours and understanding of concepts, so saying that it's just a static LLM seems to be missing the point. And yes, it's just correlations all the way down. But why should this preclude understanding or awareness of the problem domain?

2

jms4607 t1_j0ckrj0 wrote

You're only able to sample something from the manifold you have been trained on.

1

ReginaldIII t1_j0cl6lg wrote

That's not really true, because both under- and over-fitting can happen.

And it doesn't reinforce your assertion that ChatGPT has awareness or intent.

1

jms4607 t1_j0cqu0a wrote

I'd argue that if ChatGPT were fine-tuned with RL based on the responses of a human (for example, if its goal as a debater AI was to make humans less confident in their beliefs by taking the contrary position in a conversation), then it arguably has awareness of intent. Is this not possible in the training scheme of ChatGPT? I looked into how they use RL right now, and I agree it is just fine-tuning toward human-like responses, but I think a different reward function could elicit awareness of intent.
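A minimal sketch of what I mean, assuming (purely hypothetically) that we could elicit the human's stated confidence in their belief before and after the model's reply:

```python
def debater_reward(confidence_before: float, confidence_after: float) -> float:
    """Hypothetical reward signal: how much the reply reduced the human's
    stated confidence, with both values assumed to lie in [0, 1]."""
    return confidence_before - confidence_after
```

Whether optimizing that signal actually amounts to awareness of intent is, of course, the thing we're disagreeing about.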

1

ReginaldIII t1_j0cuujj wrote

It mimics statistical trends from the training data. It uses embeddings that place related semantics and concepts near to one another, and unrelated ones far from one another. Therefore, when it regurgitates structures and logical templates that were observed in the training data, it is able to project other similar concepts and semantics into those structures, making them look convincingly like entirely novel and intentional responses.
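A toy illustration of the near/far point, with hand-written 3-d vectors standing in for learned embeddings (real models learn these, in hundreds or thousands of dimensions):

```python
import numpy as np

# Hypothetical hand-written embeddings; a trained model learns these instead.
emb = {
    "king":    np.array([0.9, 0.8, 0.1]),
    "queen":   np.array([0.9, 0.7, 0.2]),
    "toaster": np.array([0.1, 0.0, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))    # close to 1: related concepts sit near each other
print(cosine(emb["king"], emb["toaster"]))  # much smaller: unrelated concepts sit far apart
```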

1

jms4607 t1_j0cva57 wrote

I don't think we know enough about the human brain to say we aren't doing something very similar ourselves. At least 90% of human brain development has been about optimizing E[agents with my DNA in the future]. Our brains are basically embedding our sensory input into a compressed latent internal state, then sampling actions to optimize some objective.
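As a cartoon of that last claim (a toy numpy sketch, with made-up dimensions and random weights standing in for whatever the brain actually does):

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 1000))  # sensory input -> compressed latent state
W_pol = rng.normal(size=(4, 16))     # latent state -> preferences over 4 actions

def act(sensory_input):
    latent = np.tanh(W_enc @ sensory_input)   # compress the observation
    prefs = W_pol @ latent                    # score the possible actions
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    return int(rng.choice(4, p=probs))        # sample an action

print(act(rng.normal(size=1000)))
```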

1

ReginaldIII t1_j0cxciw wrote

That we have the ability to project concepts into the scaffold of other concepts? Imagine a puppy wearing a sailor hat. Yup, we definitely can do that.

f(x) = 2x

I can put x=1 in, I can put x=2 in, but if I don't put anything in then it just exists as a mathematical construct; it doesn't sit there pondering its own existence or the nature of what x even is. "I mean, why 2x?!"

If I write an equation c(Φ, ω) = (Φ ω Φ), do you zoomorphise it because it looks like a cat?

What about this function, which plots out Simba? Is it aware of how cute it is?

x(t) = ((-1/12 sin(3/2 - 49 t) - 1/4 sin(19/13 - 44 t) - 1/7 sin(37/25 - 39 t) - 3/10 sin(20/13 - 32 t) - 5/16 sin(23/15 - 27 t) - 1/7 sin(11/7 - 25 t) - 7/4 sin(14/9 - 18 t) - 5/3 sin(14/9 - 6 t) - 31/10 sin(11/7 - 3 t) - 39/4 sin(11/7 - t) + 6/5 sin(2 t + 47/10) + 34/11 sin(4 t + 19/12) + 83/10 sin(5 t + 19/12) + 13/3 sin(7 t + 19/12) + 94/13 sin(8 t + 8/5) + 19/8 sin(9 t + 19/12) + 9/10 sin(10 t + 61/13) + 13/6 sin(11 t + 13/8) + 23/9 sin(12 t + 33/7) + 2/9 sin(13 t + 37/8) + 4/9 sin(14 t + 19/11) + 37/16 sin(15 t + 8/5) + 7/9 sin(16 t + 5/3) + 2/11 sin(17 t + 47/10) + 3/4 sin(19 t + 5/3) + 1/20 sin(20 t + 24/11) + 11/10 sin(21 t + 21/13) + 1/5 sin(22 t + 22/13) + 2/11 sin(23 t + 11/7) + 3/11 sin(24 t + 22/13) + 1/9 sin(26 t + 17/9) + 1/63 sin(28 t + 43/13) + 3/10 sin(29 t + 23/14) + 1/45 sin(30 t + 45/23) + 1/7 sin(31 t + 5/3) + 3/7 sin(33 t + 5/3) + 1/23 sin(34 t + 9/2) + 1/6 sin(35 t + 8/5) + 1/7 sin(36 t + 7/4) + 1/10 sin(37 t + 8/5) + 1/6 sin(38 t + 16/9) + 1/28 sin(40 t + 4) + 1/41 sin(41 t + 31/7) + 1/37 sin(42 t + 25/6) + 3/14 sin(43 t + 12/7) + 2/7 sin(45 t + 22/13) + 1/9 sin(46 t + 17/10) + 1/26 sin(47 t + 12/7) + 1/23 sin(48 t + 58/13) - 55/4) θ(111 π - t) θ(t - 107 π) + (-1/5 sin(25/17 - 43 t) - 1/42 sin(1/38 - 41 t) - 1/9 sin(17/11 - 37 t) - 1/5 sin(4/3 - 25 t) - 10/9 sin(17/11 - 19 t) - 1/6 sin(20/19 - 17 t) - 161/17 sin(14/9 - 2 t) + 34/9 sin(t + 11/7) + 78/7 sin(3 t + 8/5) + 494/11 sin(4 t + 33/7) + 15/4 sin(5 t + 51/11) + 9/4 sin(6 t + 47/10) + 123/19 sin(7 t + 33/7) + 49/24 sin(8 t + 8/5) + 32/19 sin(9 t + 17/11) + 55/18 sin(10 t + 17/11) + 16/5 sin(11 t + 29/19) + 4 sin(12 t + 14/9) + 77/19 sin(13 t + 61/13) + 29/12 sin(14 t + 14/3) + 13/7 sin(15 t + 29/19) + 13/4 sin(16 t + 23/15) ...

1

jms4607 t1_j0d65c3 wrote

  1. Projecting can be interpolation, which these models are capable of. There are a handful of image/text models that can imagine/project an image of a puppy wearing a sailor hat.

  2. All you need to do is have continuous sensory input in your RL environment, or include a cost or delay for thinking in the action space, which has been implemented in research and resolves your f(x) = 2x issue.

  3. The cat example is only ridiculous because it obviously isn't a cat. If we can't reasonably prove that it is or isn't a cat, then asking whether it is a cat is not a question worth considering. A similar idea applies to the question "is ChatGPT capturing some aspect of human cognition?". If we can't prove that our brains work in a functionally different way that can't be approximated to an arbitrary degree by an ML model, then it isn't something worth arguing about. I don't think we know enough about neuroscience to state we aren't just doing latent interpolation to optimize some objective.

  4. The Simba is only cute because you think it is cute. If we trained an accompanying text model for the Simba function and gave it the training data "you are cute" in different forms, it would probably answer yes if asked whether it was cute. GPT-3 or ChatGPT can refer to and make statements about itself.

At least agree that evolution on Earth and human actions are nothing but a multi-agent RL (MARL) POMDP environment.

1

red75prime t1_j0i1n56 wrote

> linear regression model

Where is that coming from? LLMs are not linear regression models, and a linear regression model will not be able to learn theory of mind, which LLMs seem able to do. Can you guarantee that no modelling of intent is happening inside LLMs?

> Just in higher dimensions.

Haha. A picture is just a number, but in higher dimensions. And our world is just a point in an enormously high-dimensional state space.

1

ReginaldIII t1_j0i67uc wrote

Linear regression / logistic regression is all just curve fitting.

> A picture is just a number, but in higher dimensions.

Yes... It literally is. A 10x10 RGB 24bpp image is just a point in a 300-dimensional hypercube (100 pixels x 3 channels), with each axis bounded to 0-255 in 256 discrete steps. At each of the 100 spatial locations there are 256^3 == 2^24 possible colours, meaning there are (256^3)^100 == 256^300 possible images in that entire domain. Any one image you can come up with or randomly generate is a unique point in that space.
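Spelling the counting out (exact integer arithmetic in Python):

```python
pixels = 10 * 10              # spatial locations
channels = 3                  # R, G, B at 8 bits each
dims = pixels * channels      # 300 coordinates per image
values_per_dim = 256

n_images = values_per_dim ** dims   # 256**300 == (256**3)**100
print(dims)                         # 300
print(len(str(n_images)))           # 723 -- a 723-digit number of possible images
```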

I'm not sure what you are trying to argue...

When a GAN is trained to map points on some input manifold (a 512-dimensional unit hypersphere) to points on some output manifold (natural-looking images of cats embedded within the 256x256x3-dimensional space, bounded between 0-255 and discretized into 256 distinct intensity values), then yes -- the GAN has learned a projection from one high-dimensional manifold onto another.

It is quite literally just a deterministic function from one space to the other.
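A toy version of that mapping (numpy only, 32x32 instead of 256x256 to keep it small, and a fixed random projection standing in for the trained generator, which is obviously a huge simplification):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, H, W, C = 512, 32, 32, 3

# Stand-in "generator": a fixed random linear map instead of a trained network.
G = 0.01 * rng.normal(size=(H * W * C, latent_dim))

# Sample a point on the 512-d unit hypersphere.
z = rng.normal(size=latent_dim)
z /= np.linalg.norm(z)

# Map it to a point in the HxWxC image space, quantised to 0-255.
img = (G @ z).reshape(H, W, C)
img = np.clip((img - img.min()) / (img.max() - img.min()) * 255, 0, 255).astype(np.uint8)
print(img.shape, img.dtype)   # (32, 32, 3) uint8 -- one point in that discrete space
```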

1

red75prime t1_j0i966c wrote

"Just a" seems very misplaced when we are talking about not-linear transformations in million-dimensional spaces. Like arguing that an asteroid is just a big rock.

0

ReginaldIII t1_j0i9imv wrote

That you have come to that conclusion is ultimately a failing of the primary education system.

It's late. I'm tired. And I don't have to argue about this. Good night.

1

red75prime t1_j0iay49 wrote

Good night. Enjoy the multidimensional transformations your brain will perform in sleep mode.

1