Submitted by flexaplext t3_123r90m in singularity
Imagine turning all your senses off. Your sight, hearing, smell, touch, every sensation in your body. So someone could scream at you or prod you or even drill a hole into your stomach and you wouldn't notice.
Then, entirely turn off any visual imagination. This is easy for me to do as I have aphantasia.
Doing that, what are you then left with? Still something in your head, an experience that is very much just as conscious and aware as without any other senses. But it basically is then just sounds, a voice in your head. Language.
Literally, almost entirely through the use of language alone, you can have a complete conscious experience and conscious thought processing. That's why I believe the potential future abilities of LLMs are greatly underestimated by some people.
Have you ever stopped and probed your thoughts? Like really deeply probed them to try and work out what's happening? How thoughts and ideas are being constructed inside of your head?
Take any random topic and start thinking deeply about it. Or just think up a response to this idea I'm writing right now and you've just read. Pay very close attention to what's happening inside your thought process.
Have you noticed how you're able to start responding to it and coming up with a thought but you have no idea where your thought process is actually going? You can have no idea what the entire sentence is when you first start responding, or what you're even going to say / think next after the first few words. Try it again but pay even closer attention to it.
It's like you're just coming up with one word after another, you know, like an LLM does. But not, necessarily, one exact word after another.
In that sentence I just thought about there, I was paying particular attention to how I was constructing the thought:
"But not" initially came into thought, then "necessarily" popped in and finally "one exact word after another".
It is more like thinking in 'concepts' that you string together rather than individual words. "One word after another" is an entire concept, a saying that we recognise, know and understand, and can work into an idea / thought.
Deconstructing that sentence further, the "But not" part came into my mind before the "one word after another" part was in my working thoughts. Which means I potentially knew I wanted to add nuance and a counterpoint to the statement I had just made before I even knew what that counterpoint was. Basically, my subconscious recognised that what I had just said wasn't exactly correct and responded in thought to that realization with a "But not". Only then, after that thought, did I work through why it wasn't exactly correct and realize it's because you can also think in chunks of words, 'concepts', all at once.
Here is where the subconscious comes into play; it is a huge part (I'm going to argue the most important part) of our conscious experience. Sticking still to just the concept of thought construction (so still blocking out all sensation and sensory input), it feels to me as though my subconscious is like a computer that is always running in the background, simply analyzing everything I've thought about up to that point.
It's like it's taking all the words (ideas) that I've thought about recently (like most of what's written down in this very post I'm writing out right now) and then checking them against my memories, these being relevant ideas I've read about or come up with myself previously and stored away in memory. The subconscious then recognises which ideas in my memory match against what I'm currently thinking about and considering, evaluates what's most important and relevant, and then feeds this back into my conscious thought process.
But it literally is like it is feeding ideas into your head. Again, these ideas can be in the form of just single words like "but" that flag potential error or nuance, small concepts like "one word after another", or even very large concepts like "LLMs are similar to this".
I use that as an example because that's the thought (idea) that popped into my head when writing the previous paragraph. I know I recognize there's definitely a connection there, and I have some sort of outline of what that connection is, but I don't yet know exactly what that connection is or what ideas I am going to come up with around it. I realize that the idea has important relevance, but that I'm going to have to deconstruct and rationalize it out in full within conscious working thought and further subconscious feedback to those thoughts.
The outline of the connection is that what I was talking about before is similar to what LLMs do. They read a certain amount of the text up to that point, to have context, and then run that context through the model to predict the most likely next word.
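Just to make that concrete, here's a rough toy sketch of what I mean by "read the context, predict the next word". I'm assuming the Hugging Face transformers library and GPT-2 purely as an illustrative model here, not claiming this is how any particular system is actually built or deployed:

```python
# Toy sketch: score possible next words given the context read so far.
# Assumes the Hugging Face transformers library and GPT-2 as an example model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

context = "It's like you're just coming up with one word after"

with torch.no_grad():
    inputs = tokenizer(context, return_tensors="pt")
    logits = model(**inputs).logits      # a score for every vocabulary token at every position
    next_token_logits = logits[0, -1]    # we only care about the position right after the context
    probs = torch.softmax(next_token_logits, dim=-1)

# Show a handful of likely continuations rather than a single "answer".
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  {p.item():.3f}")
```

The point of the sketch is that the model doesn't pick one "correct" next word; it assigns a probability to every possible continuation, which to me feels loosely analogous to several candidate words / concepts being primed at once.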
I really don't think, in that sense, that they're doing something that much different to what our subconscious does for us. I believe, though, that a very key and important difference is that our brains are much better at compacting together and recognizing important ideas 'concepts'.
Within the context of what we've recently thought about, our subconscious isn't holding onto and analyzing every single word. It's conceptualizing overarching ideas from that entire context and then matching against those. This allows our brains to be both highly efficient and also well honed / adapted towards idea creation, analysis and manipulation.
Within our memories we also store things in layered ideas 'concepts'. So again, it is much more efficient to match current context against these and it primes sparks of ideas and ingenuity.
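If you want a toy picture of that "match the current context against stored concepts" idea in code, it might look something like the sketch below. I'm assuming the sentence-transformers library here, and the 'memories' are just made-up strings standing in for stored concepts, so treat it as an analogy rather than a claim about how memory actually works:

```python
# Toy sketch: embed the current train of thought and compare it against stored
# "concept" memories, then surface the closest matches.
# Assumes the sentence-transformers library; the memories are invented examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

memories = [
    "Speech comes out one word at a time, but thought arrives in chunks.",
    "LLMs predict the next token from the context they have read so far.",
    "Aphantasia means thinking without mental imagery.",
    "22 + 43 = 65",
]

current_context = "My subconscious seems to match what I'm thinking about against stored ideas."

memory_vecs = model.encode(memories, convert_to_tensor=True)
context_vec = model.encode(current_context, convert_to_tensor=True)

# Cosine similarity stands in for the subconscious "recognising which ideas match".
scores = util.cos_sim(context_vec, memory_vecs)[0]
for memory, score in sorted(zip(memories, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {memory}")
```

Cosine similarity is obviously a crude stand-in for whatever the brain actually does, but the overall shape (embed the current context, compare it against stored concepts, feed the best matches back into working thought) is the analogy I'm drawing.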
I believe that if we manage to refine LLMs to capture ideas as well as we do, and to store these in memory as efficiently as we do, it will level up their capabilities and efficiency immensely. I think it is entirely possible to do this, and I believe it's already very much being worked on. If we do manage to get LLMs up to a similar level as us in that particular regard, I don't think there will actually be much left separating our mental abilities from theirs in the actual thought process.
There is another part of the model that needs a lot of work though. Humans are incredibly good at recognising which ideas are most relevant and important to a given problem. This will likely be the trickiest part to get working well in LLMs, and it still needs a lot of work and revelations to get right. I foresee this remaining the most difficult part to crack.
If this is solved, I believe it would come along with impeccable accuracy, true idea creation, and an ability to oversee tasks. I consider this to be the vital piece of the puzzle that's left. It is important to note, though, that I see no reason such an ability can't be worked into something close to existing LLM architecture. I think we can get LLMs right up to near our abilities. As I have tried to outline, I don't believe we actually think much differently from them. We're just still considerably better at some key parts of the process.
Have you ever questioned: "how do I know that 22 + 43 is equal to 65"?
Like really question, at the deepest level, how you know that. Not only that, but how do you know it with complete and utter certainty, absolute sureness in your understanding of that equation? Your subconscious is able to see that equation and match it perfectly against the relevant ideas within your memory.
It is because we are able, with absolute precision, to pick out the most important concepts pertaining to a question that we can arrive at a completely accurate working solution. This is not something our LLMs can do very well yet. This is part of the task I am describing. And I think the capability this will truly unlock within LLMs, if we manage to get it working well in them, is highly underestimated. I will repeat: we should not expect something far off our own capability at all if just this single code is cracked.
aalluubbaa t1_jdwoyv7 wrote
It's so weird that while I was reading your post, I really didn't visualize anything but the words themselves. I was shocked that I just skimmed through it in its totality and got what you meant. The weird thing is that I really don't remember the specifics of how you got to your conclusion, only a general idea.
It was as if I didn’t even pay attention to the words in your post but just sort of looked at them. You are right. We don’t need to visualize things to have thoughts.
I would say that when I see something specific and identifiable, something which is an object, a thing, or a noun describing a real thing, I can sort of picture what it is. But most of the time, I would argue that a higher dimension of reasoning really just comes from the words and how they are combined.