Submitted by Frumpagumpus t3_10vbrgg in MachineLearning

The physical world we live in has 4 dimensions; string theory posits up to 10. It seems like in order to successfully model the abstract space of ideas, which relates things in the physical world to each other and describes them, machine learning needs thousands of dimensions. Also, to the extent that ML algos/matrices can be made sparse, that seems to tell us something about the density of the mapping between abstract space and physical space... anyone know any papers with this line of thinking?

It also seems a bit unintuitive to me, because geometrically, space gets exponentially more complicated as you add dimensions, yet in many cases ML scales linearly or better with matrix dimensionality.
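(A toy sketch of the tension the post describes, not from the thread itself: "geometric complexity" is taken here to mean the number of grid points needed to cover a space at fixed resolution, which grows exponentially with dimension, while the parameter count of a dense layer grows only quadratically. The function names are illustrative, not from any library.)

```python
def grid_points(dim, resolution=10):
    """Points needed to cover [0,1]^dim at the given per-axis resolution."""
    return resolution ** dim

def dense_layer_params(dim):
    """Weights + biases for a square dense layer mapping dim -> dim."""
    return dim * dim + dim

for d in (1, 2, 3, 10):
    print(d, grid_points(d), dense_layer_params(d))
# at d=10, covering the cube needs 10 billion points,
# but the dense layer has only 110 parameters
```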

0

Comments


Sharchimedes t1_j7gh721 wrote

It’s just math and a lot of guessing, so not really.

8

Frumpagumpus OP t1_j7gi0l1 wrote

one person's guessing is another's Monte Carlo technique, perhaps? (also, I don't understand the downvotes)

2

Cogwheel t1_j7h3xi6 wrote

> (also i don't understand why the downvotes)

I will never understand Reddit's downvote behavior. It's clearly not just bots... It seems some people just can't stand honest curiosity, not already knowing what they know, etc.

8

cede0n t1_j7h5977 wrote

You almost get pity-upvoted for talking about downvotes, then get downvoted for the lols / controversy. Naturally, talking about this gets you a downvote, but now I've said that...

4

jcinterrante t1_j7guik1 wrote

Check out the UChicago Knowledge Lab. This sounds generally related to what James Evans is working on. His work is more narrowly targeted than what you're talking about, because it's focused on the generation of ideas in academic settings. But it's still a good starting place for you.

It also sounds like it could be related to some of the work coming out of the Santa Fe Institute. But I don’t have any specific papers in mind.

7

Frumpagumpus OP t1_j7gzws1 wrote

thx for the recommendations, always fun to read research that appeals to your personal flavor of intuition!

1

Ok_Listen_2336 t1_j7ie52h wrote

All models are wrong, some are just useful.

I don't draw any association to the complexity of nature from the complexity of the latent model that scientists use to research nature.

4

junetwentyfirst2020 t1_j7ivn59 wrote

I agree. It's also important to remember that the brain is just the architecture definition and the mind is the model. The ML models and the mind model are unrelated, however.

1

Red-Portal t1_j7ixd0h wrote

High dimensionality does not necessarily mean more complex. In fact, it has been known for quite a while that going to higher dimensions makes various problems easier; for example, datasets that are not linearly separable suddenly become separable in higher dimensions. Turn this up to 11 and you basically get kernel machines. Kernels embed the data into potentially infinite-dimensional spaces, and that approach was very successful before deep learning took over.
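(A minimal sketch of the separability point, not from the comment itself: XOR is the classic dataset that no line can separate in 2D, but adding a single product feature x1*x2 lifts it into 3D where one threshold suffices. The feature map and weights here are hand-picked for illustration.)

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels: not linearly separable in 2D

# Lift each point (x1, x2) to (x1, x2, x1*x2)
X_lifted = np.column_stack([X, X[:, 0] * X[:, 1]])

# In the lifted space, the plane x1 + x2 - 2*x1*x2 = 0.5 splits the classes
scores = X_lifted @ np.array([1.0, 1.0, -2.0])
pred = (scores > 0.5).astype(int)
print(pred)  # [0 1 1 0], matching y
```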

3

scraper01 t1_j7glxn5 wrote

The deep differentiable models ML uses are probably not optimal. I think this question would be more interesting if we had a replica of the algorithm biological neural networks use.

1

cede0n t1_j7h604x wrote

I have had similar toilet-thoughts to this. It's also interesting to me that we operate at fixed floating-point precision and only roughly approximate patterns, which tells me high dimensionality seems to help map complexity with less precision than would otherwise be needed?

1

Acceptable-Fudge-816 t1_j7kpbkm wrote

The real world can also have thousands of dimensions. Time, color, hatred, tension in the room, air currents, and anything else you can possibly attribute to a position/thing.

At the end of the day it's just words, and their meaning depends on agreements. When we speak of the 3 dimensions, we mean the 3 dimensions of the physical world that we decided to define with 3 coordinates that help us know the position of something. We might as well have used complex numbers and kept it to 2 coordinates, or decided time should be included as part of the concept of position. So when you talk about "dimensions" in general, it may as well mean anything.
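(A small sketch of the representation point, not from the comment: a 2D position can equally be encoded as one complex coordinate, so the "number of dimensions" depends on the chosen representation. Translation becomes complex addition and rotation becomes multiplication.)

```python
p = complex(3.0, 4.0)   # the point (3, 4) as a single complex number
q = complex(1.0, -2.0)  # a displacement (1, -2)

moved = p + q           # translation: (4, 2)
rotated = p * 1j        # rotate (3, 4) by 90 degrees: (-4, 3)
print(moved, rotated)
```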

1

mskogly t1_j7jevm7 wrote

I have a theory that human imagination/creativity is linked to our dreams, and that we learn and change faster because our two brain halves play out scenarios against each other to test them. Our internal dreamworld can suspend and jump over the limitations of the physical world (like time, place, senses), yet still manage to improve how we understand and interact with the world when awake. I think a better understanding of the human brain, and especially dreams, is needed for the next big leap in machine learning, instead of the brute-force techniques used now to train static models.

0