superluminary t1_j99hmc9 wrote

You missed the part where maybe we are just “language models”.

We have a short-term memory, like a 4,000-token input buffer. We have long-term memory, like a trained network. Each night we sleep and dream, and the dreams look a lot like Stable Diffusion output (not a language model, I know, but it still uses transformer components).
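
A minimal sketch of the comparison being drawn here (Python; the `ToyAgent` name and the 4,000-token figure are assumptions for illustration, not anyone's actual system): "short-term memory" as a rolling buffer that drops anything older than its limit, and "long-term memory" as weights that only change during a separate training pass.

```python
from collections import deque

CONTEXT_LIMIT = 4000  # rough "input buffer" size, in tokens (assumed figure)

class ToyAgent:
    def __init__(self):
        # "Long-term memory": parameters that only change during training,
        # never while the agent is in the middle of responding.
        self.weights = {}
        # "Short-term memory": a rolling buffer that silently drops the
        # oldest tokens once the context limit is reached.
        self.context = deque(maxlen=CONTEXT_LIMIT)

    def perceive(self, tokens):
        # New input pushes the oldest input out of the buffer.
        self.context.extend(tokens)

    def respond(self):
        # A real model would condition on both self.weights and self.context;
        # the sketch only shows what information is available at answer time.
        return list(self.context)
```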

Obviously we have many more sensory inputs than an LLM, and we can somehow do unsupervised learning from our own input data, but are we fundamentally different?

NoidoDev t1_j9baaz6 wrote

Ahem, no. We aren't just “language models”; that's just silly. I mean, there's the NPC meme, but people are capable of more than just producing the most statistically likely response without knowing what it means. That's certainly one thing we can do, but it isn't the only thing we do.

We also have a personal life story and memories, models of the world, more inputs like vision, etc.

superluminary t1_j9c8auj wrote

Certainly, we have additional input channels, notably vision. We also appear to run a network training process every night based on whatever is in our short-term memory, which is what gives us a "personal life story".

Beyond this though, what is there?

My internal dialogue appears to bubble up out of nowhere. It's presented to my consciousness in response to what I see and hear, i.e. whatever is in my immediate input buffer, processed by my nightly-trained neural network.

I struggle with the same classes of problems an LLM does. Teach me a new game, and I'll probably suck at it until I've practiced and slept on it a couple of times. This is pretty similar to loading it into a buffer and running a training step on the buffer data. Give me a tricky puzzle and the answer will float into my mind apparently from nowhere, just as it does for an LLM.
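
As a rough sketch of that "fill a buffer during the day, train on it overnight" picture (PyTorch; the tiny model, the `sleep` function, and the random data are all made up for illustration, not anyone's actual method):

```python
import torch
from torch import nn, optim

# Hypothetical tiny model standing in for "long-term memory" (trained weights).
model = nn.Linear(8, 8)
optimizer = optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# "Daytime": experiences accumulate in a buffer (short-term memory)
# without changing the weights at all.
replay_buffer = [(torch.randn(8), torch.randn(8)) for _ in range(32)]

# "Night": a consolidation pass runs training steps over the buffered data,
# after which the buffer can be cleared.
def sleep(buffer, epochs=3):
    for _ in range(epochs):
        for x, y in buffer:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    buffer.clear()

sleep(replay_buffer)
```

The only point of the sketch is that the weights stay frozen while experience piles up, and only move during the offline pass.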

> Without knowing what it means

That's an assumption. We don't actually know how the black box gets the right words. We don't actually know how your neural network gets the right words either.
