rogert2 t1_j70zh4c wrote
Reply to comment by rogert2 in I finally think the concept of AGI is misleading, fueled by all the hype, and will never happen by ReExperienceUrSenses
Zooming back out to the larger argument: it seems like you're laboring under some variation of the picture theory of language, which holds that words have a metaphysical correspondence to physical facts. You couple that with the assertion that even though we grasp this correspondence (and thus wield meaning via symbols), no computer ever could -- an assertion you support by pointing to several facts about the physicality of human experience that turn out to be either not categorically unavailable to computers or demonstrably not components of intelligence.
The picture theory of language was first proposed by super-famous philosopher Ludwig Wittgenstein in the truly sensational book Tractatus Logico-Philosophicus, which I think he wrote while he was a POW in WWI. Despite the book taking Europe by storm, he later completely rejected all of his own philosophy, replacing it instead with a new model that he described as a "language game".
I note this because, quite interestingly, your criticisms of language models seem like a very natural application of Wittgenstein's language-game approach to current AI.
I find it hard to describe the language-game model clearly, because Wittgenstein utterly failed to articulate it well himself: Philosophical Investigations, the book in which he laid it all out, is almost literally an assemblage of disconnected post-it notes that he was still organizing when he died, and they basically shoveled it out the door in that form for the sake of posterity. That said, it's filled with startling insight. (I'm just a little butt-hurt that it's such a needlessly difficult work to tackle.)
The quote from that book which comes to my mind immediately when I look at the current state of these language model AIs, and when I read your larger criticisms, is this:
> philosophical problems arise when language goes on holiday
By which he means something like: "communication breaks down when words are used outside their proper context."
And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday."
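The "shuffles words around" point can be made concrete with a toy model. A bigram Markov chain is a far cruder cousin of a modern language model, used here purely as an illustration: it produces locally plausible word sequences while representing nothing about meaning at all.

```python
import random

def build_bigrams(text):
    """Map each word to the list of words that ever follow it in the corpus."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def babble(table, start, n=8, seed=0):
    """Generate n words by repeatedly sampling a word that followed the current one."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

table = build_bigrams("the cat sat on the mat and the dog sat on the rug")
print(babble(table, "the"))  # locally fluent word-shuffling with no understanding behind it
```

The output tends to look grammatical because each adjacent pair occurred in the corpus, yet there is no context, intent, or world anywhere in the system -- a miniature of the "on holiday" complaint.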
But: just because language-model systems don't qualify as true AGIs, that doesn't mean no such thing could ever exist. That's a stronger claim that requires much stronger proof, proof which I think cannot be recovered from the real shortcomings of language-model systems.
Still, as I said, I think your post is a good one. I've read a lot of published articles written by humans that didn't engage with the topic as well as I think you did. Keep at it.
ReExperienceUrSenses OP t1_j71z12h wrote
You all really have to go on a journey with me here. The mind FEELS computable, but this is misleading.
Consider this: how much of your mind actually exists separate from the body? I'm sure you have attempted a breakdown. You can start by removing control of your limbs. Still there. Then any sensation. Still there. Remove signals from your viscera, like hunger. Mind is still there, I guess. Now start removing everything from your head and face: sight, sound, taste, the rest of the sensations in your skin, and any other motor control. Now you are a mind in a jar, sensory deprived. You would say you're still in there, though. But that's because you have a large corpus of experiences in your memory for thoughts to emerge from. Now try to imagine what you are if you NEVER had any of those experiences to draw from.
So to expand what I was getting at a bit further: when I say visceral experience, I mean that all the coordinated activity going on in and around all the cells in your body IS the experience. You say processing doesn't occur in the eye, but that is the first place it does. The retina is multiple layers of neurons and is an extension of the brain, formed from embryonic neural tissue. If you stretch it a bit further, at the molecular level, everything is an "extension" of the brain. And if everything is, then you can start to modularize the body in different ways. Now you can think of the brain as more the medium of coordination than the executive control. Your mind is the consensus of all the cells in your body.
The things I’ve been hypothesizing about in my studies of microbiology and neuroscience require this bit of reconceptualization: choosing a new frame of reference to see what you get.
You can think of neurons as both powerful individual organisms in their own right AND a neat trick: they can act in concert as if they were a single shared cytoplasm, while retaining separate membranes for speed and process isolation. Neurons need to quickly transmit signal and state from all parts of the body so that, for instance, your feet are aware of what's going on with the hands, and they can work together to acquire food to satisfy the stomach. This doesn’t work with any speed and integrity in a single shared cytoplasm at the scale of our bodies. Some microorganisms do coordinate into shared cytoplasms, but our evolutionary line utilized differentiation to great effect.
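The speed claim survives a back-of-the-envelope check. Using rough order-of-magnitude assumptions (a small-molecule diffusion coefficient of about 1e-9 m²/s and a fast myelinated-axon conduction velocity of about 100 m/s -- both illustrative textbook values, not measurements):

```python
# Signaling across a ~1 m body: passive diffusion vs. an action potential.
D = 1e-9   # m^2/s, diffusion coefficient of a small molecule in cytoplasm (assumed)
v = 100.0  # m/s, conduction velocity of a fast myelinated axon (assumed)
L = 1.0    # m, roughly foot-to-brain distance

t_diffusion = L**2 / (2 * D)  # characteristic 1-D diffusion time, ~L^2 / 2D
t_spike = L / v               # time for an action potential to travel L

print(f"diffusion: ~{t_diffusion / (3600 * 24 * 365):.0f} years")
print(f"spike:     ~{t_spike * 1000:.0f} ms")
```

Diffusion across a meter-scale shared cytoplasm takes on the order of a decade; a spike makes the trip in about ten milliseconds. Hence the value of separate, electrically signaling cells at body scale.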
Everyone makes the assumption that I’m saying humans are special. I’m really not. This applies to all life on this planet. CELLS are special, because the "computing power" is unmatched. Compare electronic relays vs. vacuum tubes vs. transistors: you can’t make a smartphone with vacuum tubes. Likewise, transistors are trounced by lipid membranes, carbohydrates, nucleic acids, and proteins, among other things, in the same way. Computers shuffle voltage; we are “programmable” matter (as in, matter that can be shaped for purpose by automated processes, not that there are programs involved, because there aren't). This is a pure substrate comparison; the degree of complexity makes all the difference, not just the presence of it. We are matter that decomposes and recomposes other matter. Computers are nowhere near that sophistication. Computers do not have the power to simulate even fractions of all that is going on in real time, because of rate-limiting steps and combinatorial explosions that push the algorithmic complexity toward exponential time, O(2^n). All you have to do is look up some of our attempts to see the engineering hurdles. Even if it's logically possible from the view of abstract mathematical constructs, that doesn’t mean it can be implemented. Molecular activity at that scale is computationally intractable.
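The combinatorial-explosion point is easy to illustrate. Even under the toy assumption that each molecule has only two states, tracking pairwise interactions grows quadratically while tracking the joint state space grows exponentially -- and real molecules have far more than two states:

```python
from math import comb

def pairwise_interactions(n):
    """Candidate pairwise interactions among n molecules: n choose 2 (quadratic growth)."""
    return comb(n, 2)

def joint_states(n, states_per_molecule=2):
    """Size of the joint state space if each molecule has just 2 states (exponential growth)."""
    return states_per_molecule ** n

for n in (10, 100, 1000):
    print(n, pairwise_interactions(n), joint_states(n))
```

At n = 100 molecules the joint state space already exceeds 10^30; a single cell contains billions of molecules. This is the scale gap behind the intractability claim.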
To go further: even if it is not computationally intractable, the problem still remains. How do you encode the things I've been talking about here? Really try to play this out in your mind. What would even some pseudocode look like? Now look back at your pseudocode. How much heavy lifting is being done by the words? How many of these things can actually be implemented with a finite instruction set architecture? And with Heisenberg’s uncertainty principle lurking about, how accurate are your models and algorithms of all this molecular machinery in action?
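Taking the exercise literally, a first pseudocode attempt might look like the sketch below. It is deliberately hollow: every function name is a label for an unsolved problem, which is exactly the "heavy lifting by the words" being described.

```python
def perceive(world):
    """Stand-in for everything the retina, cochlea, skin, and sensory cortex do."""
    # The single word "perceive" is doing the work of billions of coordinated cells.
    raise NotImplementedError("an entire sensory apparatus hides behind this name")

def recall(cue):
    """Stand-in for however memories are actually stored and retrieved."""
    raise NotImplementedError("no finite instruction set obviously fills this body in")

def think(percept, memory):
    """Stand-in for integration, attention, judgment, and whatever else 'thinking' is."""
    raise NotImplementedError("the verb 'think' is the heaviest lifter of all")

def mind(world):
    """The whole 'program': each call site reads plausibly; no body can be written."""
    percept = perceive(world)
    return think(percept, recall(percept))
```

Every line type-checks and the structure looks like a program, but running it immediately hits a body that no one knows how to write -- the names carry all the apparent progress.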
Surur t1_j71a7p0 wrote
> And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday."
This is not true. AFAIK it has a 96-layer neural network with billions of parameters.