red75prime t1_j0i1n56 wrote

> linear regression model

Where is that coming from? LLMs are not LRMs. An LRM would not be able to learn theory of mind, which LLMs seem to be able to do. Can you guarantee that no modelling of intent is happening inside an LLM?

> Just in higher dimensions.

Haha. A picture is just a number, but in higher dimensions. And our world is just a point in an enormously high-dimensional state space.

ReginaldIII t1_j0i67uc wrote

Linear regression / logistic regression is all just curve fitting.
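To make that concrete, here is a minimal sketch of logistic regression as literal curve fitting: gradient descent bending a sigmoid to toy 1-D data (the data and learning rate are illustrative, plain numpy):

```python
import numpy as np

# Toy 1-D binary classification data (hypothetical values).
x = np.array([0.5, 1.0, 1.5, 3.0, 3.5, 4.0])
y = np.array([0, 0, 0, 1, 1, 1])

w, b = 0.0, 0.0  # parameters of the curve being fitted
lr = 0.5         # learning rate

for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid curve
    # Gradient of the mean cross-entropy loss w.r.t. w and b.
    grad_w = np.mean((p - y) * x)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # the fitted "curve" is sigmoid(w*x + b)
```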

> A picture is just a number, but in higher dimensions.

Yes... it literally is. A 10x10 RGB 24bpp image is just a point in the 300-dimensional hypercube bounded by 0-255, with 256 discrete steps along each axis. At each of the 10x10 spatial locations there are 256^3 == 2^24 possible colours, meaning there are (256^3)^100 == 2^2400 possible images in that entire domain. Any one image you can come up with or randomly generate is a unique point in that space.
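You can check the arithmetic in a few lines (the image here is random, just to have a point in that space):

```python
import numpy as np

# A 10x10 RGB image with 8 bits per channel is one point in a
# 10*10*3 = 300-dimensional discrete cube with 256 steps per axis.
img = np.random.randint(0, 256, size=(10, 10, 3), dtype=np.uint8)

point = img.reshape(-1)   # flatten: the image *is* this vector
print(point.shape)        # (300,)

# Number of distinct images in that domain: 256**300 == 2**2400.
num_images = 256 ** (10 * 10 * 3)
print(num_images.bit_length())  # 2401, i.e. about 2**2400 images
```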

I'm not sure what you are trying to argue...

When a GAN is trained to map points on some input manifold (say, a 512-dimensional unit hypersphere) to points on some output manifold (natural-looking images of cats embedded within the 256x256x3-dimensional space bounded between 0 and 255 and discretized into 256 distinct intensity values), then yes: the GAN has learned a mapping from points on one high-dimensional manifold to points on another.

It is quite literally just a function from one space to the other.
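The whole pipeline fits in a few lines. The `generator` below is a hypothetical stand-in that returns random pixels; a real one would be a trained network, but the shape of the mapping, sphere in, image out, is the same:

```python
import numpy as np

def sample_latent(dim=512, rng=np.random.default_rng()):
    # A point on the unit hypersphere: sample a Gaussian, then normalize.
    z = rng.standard_normal(dim)
    return z / np.linalg.norm(z)

def generator(z):
    # Hypothetical stand-in for a trained GAN generator: any deterministic
    # function from the 512-D sphere into the 256x256x3 image cube fits
    # the "it's just a function" framing.
    rng = np.random.default_rng(abs(hash(z.tobytes())) % (2**32))
    return rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

z = sample_latent()        # a point on the input manifold
img = generator(z)         # the corresponding point on the output manifold
print(z.shape, img.shape)  # (512,) (256, 256, 3)
```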

red75prime t1_j0i966c wrote

"Just a" seems very misplaced when we are talking about not-linear transformations in million-dimensional spaces. Like arguing that an asteroid is just a big rock.

ReginaldIII t1_j0i9imv wrote

That you have come to that conclusion is ultimately a failing of the primary education system.

It's late. I'm tired. And I don't have to argue about this. Good night.

red75prime t1_j0iay49 wrote

Good night. Enjoy the multidimensional transformations your brain will perform in sleep mode.
