
rogert2 t1_j70zgil wrote

Good post. I do have responses to a few of your points.

You argue that the systems we're building will fail to be genuine intelligences because, at bottom, they are blindly manipulating symbols without true understanding. That's a good objection, just as valid in the ChatGPT era as it was when John Searle presented it as a thought experiment that has become known as "The Chinese Room argument":

> Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.

There's plenty of evidence to show that modern "AIs," which are just language models, are essentially the same as Searle's box (worse, even, because their instructions are noticeably imperfect). So, I think you're on solid ground to say that ChatGPT and other language models are not real intelligences, and furthermore that nothing which is just a language model could ever qualify.

But it's one thing to say "a language model will never achieve understanding," and quite another to say "it is impossible to create an artificial construct which has real understanding." And you do make that second, stronger claim.


Your argument is that the foundation that works for humans is not available to computers. I think the story you tell here is problematic.

You talk a bit about the detailed chain of physical processes that occur as sensory input reaches the human body, travels through the perceptual apparatus, and ultimately modifies the physical structure of the brain.

But, computers also undergo complex physical processes when stimulated, so "having a complex process occur" is not a categorical differentiator between humans and computers. I suspect that the processes which occur in humans are currently much more complex than those in computers, but we can and will be making our computers more complex, and presumably we will not stop until we succeed.

And, notably, a lot of the story you tell about physical processes is irrelevant.

What happens in my mind when I see something has very little to do with the rods and cones in my eyes, which is plain when we consider any of these things:

  • When I think about something I saw earlier, that process of reflection does not involve my eyeballs.
  • Color-blind people can learn, understand, and think about all the same things as someone with color-vision.
  • A person with normal sight who becomes blind later does not lose all their visual memories, the knowledge derived from those memories, or their ability to reflect on those things.

Knowledge and understanding occur in the brain and not in the perceptual apparatus. (I don't know much about muscle memory, but I'd wager that the hand muscles of a practiced pianist don't play a real part in understanding Rachmaninoff's work. If any real pianists disagree on that point, PM me with your thoughts.)


So, turning our attention to just what happens in the brain, you say:

> The physical activity and shifting state ARE the result, no further interpretation necessary

I get what you're saying here: the adjustment that occurs within the physical brain is the learning. But you're overlooking the fact that this adjustment is itself an encoding of information, and is not the information itself.

It's important to note that there is no resemblance between the physical state of the brain and the knowledge content of the mind. This is a pretty famous topic in philosophy, where it's known as "the mind-body problem."

To put it crudely: we are quite certain that the mind depends on the brain, and so doing stuff to the brain will have effects on the mind, but we also know from experiment that the brain doesn't "hold" information the way a backpack "holds" books. The connection is not straightforward enough that we can inspect the content of a mind by inspecting the brain.

I understand the word "horse." But if you cut my brain open, you would not find a picture of a horse, or the word "horse" written on my gray matter. We can't "teach" somebody my email password by using surgery to reshape their brain like mine.

And that cuts both ways: when I think about horses, I have no access to whatever physical brain state underlies my understanding. In fact, since the brain itself has no sensory nerve endings, and my brain is encased in my skull (which I have not sawed open), I have no direct access to my brain at all, despite being quite aware of at least some of the content of my mind.

So, yes, granted: AI based on real-world computing hardware would have to store information in a way that doesn't resemble the actual knowledge, but so do our brains. And not only is there no reason to suppose that intelligence resides in just one particular encoding mechanism, even if it did, there's no reason to suppose that we couldn't construct a "brain" device that uses that same special encoding: an organic brain-thing, but with five lobes, arranged differently to suit our purposes.


The underpinnings you highlight are also problematic.

I think this quote is representative:

> The base case, the MEANING comes from visceral experience.

One real objection to this is that lots of learning is not visceral at all. For example: I understand the term "genocide," but not because I experienced it first-hand.

Another objection is that the viscera of many learning experiences are essentially indistinguishable from each other. As an example: I learned different stuff in my Philosophy of Art class than I learned in my Classical Philosophy class, but the viscera of both consisted of listening to the exact same instructor lecturing, pointing at slides that were all but identical to each other, and reading texts printed on paper of the same color and in the same typeface, all in the exact same classroom.

If the viscera were the knowledge, then because the information in these two classes was so different, I would expect there to be at least some perceptible difference in the viscera.

And, a Spanish student who took the same class in Spain would gain the same understanding as I did, even though the specific sounds and slides and texts were different.

I think all of this undermines the argument that knowledge or understanding are inextricably bound up in the specifics of the sensory experience or the resulting chain reaction of microscopic events that occurs within an intelligent creature.

TO BE CONTINUED...


rogert2 t1_j70zh4c wrote

Zooming back out to the larger argument: it seems like you're laboring under some variation of the picture theory of language, which holds that words have a metaphysical correspondence to physical facts. You couple that with the assertion that even though we grasp that correspondence (and thus wield meaning via symbols), no computer ever could -- an assertion you support by pointing to several facts about the physicality of human experience which, it turns out, are either not categorically unavailable to computers or are demonstrably not components of intelligence.

The picture theory of language was first proposed by super-famous philosopher Ludwig Wittgenstein in the truly sensational book Tractatus Logico-Philosophicus, which I think he wrote while he was a POW in WWI. Despite the book taking Europe by storm, he later completely rejected all of his own philosophy, replacing it instead with a new model that he described as a "language game".

I note this because, quite interestingly, your criticisms of language models seem like a very natural application of Wittgenstein's language-game approach to current AI.

I find it hard to describe the language-game model clearly, because Wittgenstein utterly failed to articulate it well himself: Philosophical Investigations, the book in which he laid it all out, is almost literally an assemblage of disconnected post-it notes that he was still organizing when he died, and they basically shoveled it out the door in that form for the sake of posterity. That said, it's filled with startling insight. (I'm just a little butt-hurt that it's such a needlessly difficult work to tackle.)

The quote from that book which comes to my mind immediately when I look at the current state of these language model AIs, and when I read your larger criticisms, is this:

> philosophical problems arise when language goes on holiday

By which he means something like: "communication breaks down when words are used outside their proper context."

And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday."

But: just because language-model systems don't qualify as true AGIs, that doesn't mean no such thing could ever exist. That's a stronger claim that requires much stronger proof, proof which I think cannot be recovered from the real shortcomings of language-model systems.

Still, as I said, I think your post is a good one. I've read a lot of published articles written by humans that didn't engage with the topic as well as I think you did. Keep at it.


ReExperienceUrSenses OP t1_j71z12h wrote

You all really have to go on a journey with me here. The mind FEELS computable, but this is misleading.

Consider this: how much of your mind actually exists separate from the body? I'm sure you have attempted a breakdown. You can start by removing control of your limbs. Still there. Then any sensation. Still there. Remove signals from your viscera, like hunger. Mind is still there, I guess. Now start removing everything from your head and face: sight, sound, taste, the rest of the sensations in your skin, and any other motor control. Now you are a mind in a jar, sensory-deprived. You would say you're still in there, though. But that's because you have a large corpus of experiences in your memory for thoughts to emerge from. Now try to imagine what you are if you NEVER had any of those experiences to draw from.

So to expand on what I was getting at a bit further: when I say visceral experience, I mean that all the coordinated activity going on in and around all the cells in your body IS the experience. You say processing doesn't occur in the eye, but that is the first place it does. The retina is multiple layers of neurons and is an extension of the brain, formed from the embryonic neural tissue. If you stretch it a bit further, at the molecular level, everything is an "extension" of the brain. And if everything is, then you can start to modularize the body in different ways. Now you can think of the brain as more the medium of coordination than the executive control. Your mind is the consensus of all the cells in your body.

The things I've been hypothesizing about in my studies of microbiology and neuroscience require this bit of reconceptualization: choosing a new frame of reference to see what you get.

You can think of neurons as both powerful individual organisms in their own right AND a neat trick: they can act in concert as if they were a single shared cytoplasm, while keeping separate membranes for speed and process isolation. Neurons need to quickly transmit signal and state from all parts of the body, so that, for instance, your feet are aware of what's going on with the hands and they can work together to acquire food to satisfy the stomach. This doesn't work in a single shared cytoplasm with any speed and integrity at the scale of our bodies. Some microorganisms do coordinate into shared cytoplasms, but our evolutionary line utilized differentiation to great effect.

Everyone assumes that I'm saying humans are special. I'm really not. This applies to all life on this planet. CELLS are special, because their "computing power" is unmatched. Compare electronic relays vs vacuum tubes vs transistors: you can't make a smartphone with vacuum tubes. Likewise, transistors are trounced by lipid membranes, carbohydrates, nucleic acids, and proteins, among other things. Computers shuffle voltage; we are "programmable" matter (as in, matter that can be shaped for purpose by automated processes, not that there are programs involved, because there aren't). This is a pure substrate comparison: the degree of complexity makes all the difference, not just the presence of it. We are matter that decomposes and recomposes other matter. Computers are nowhere near that sophistication. Computers do not have the power to simulate even fractions of all that is going on in real time, because of rate-limiting steps and combinatorial explosions: pairwise interactions alone scale as O(n^2), and the joint states blow up exponentially. All you have to do is look up some of our attempts to see the engineering hurdles. Even if it's logically possible from the view of the abstract mathematical constructs, that doesn't mean it can be implemented. Molecular activity at that scale is computationally intractable.
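To make the scale concrete, here's a rough back-of-envelope sketch. Every number in it is an order-of-magnitude guess I'm using for illustration, not a measurement:

```python
import math

# Order-of-magnitude guesses, purely illustrative.
molecules_per_cell = 10**9        # small molecules in a single bacterial cell, roughly
proteins_per_cell = 3 * 10**6     # protein copies in that same cell, roughly
states_per_protein = 10           # conformational states per protein, a lowball

# Pairwise interactions alone scale as O(n^2).
pairs_per_timestep = math.comb(molecules_per_cell, 2)
print(f"~{pairs_per_timestep:.1e} candidate pairwise interactions per timestep")

# The joint conformational state space scales as O(k^n), i.e. genuinely exponentially.
exponent = proteins_per_cell * math.log10(states_per_protein)
print(f"~10^{int(exponent)} joint protein conformations in a single cell")
```

Even if you shave many orders of magnitude off those guesses, the conclusion doesn't change: you can't brute-force this in real time.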

To go further: even if it is not computationally intractable, the problem still remains. How do you encode the things I've been talking about here? Really try to play this out in your mind. What would even some pseudocode look like? Now look back at your pseudocode. How much heavy lifting is being done by the words? How many of these things can actually be implemented with a finite instruction set architecture? And with Heisenberg's uncertainty principle lurking about, how accurate are your models and algorithms of all this molecular machinery in action?
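Here's the kind of dead end I mean. A runnable toy, but every name in it is a placeholder I just made up, and every body is a stub, because nobody knows what would actually go inside:

```python
# Toy sketch only: the program runs, but the English words are doing all the work.
class Observer:
    def __init__(self):
        self.memory = []

    def perceive(self, scene):
        return f"raw sensations of {scene}"        # encoded how, physically?

    def feel(self, sensations):
        return f"dread about {sensations}"         # what state of what substrate is "dread"?

    def remember(self, sensations, feelings):
        self.memory.append((sensations, feelings))  # stored in what representation?


obs = Observer()
s = obs.perceive("a dying animal")
f = obs.feel(s)
obs.remember(s, f)
print(f)   # the program "understood" nothing; the words carried the meaning for us
```

Every line runs, and none of it gets you an inch closer to the experience it names.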


Surur t1_j71a7p0 wrote

> And that's what ChatGPT does: it shuffles words around, and it's pretty good at mimicking an understanding of grammar, but because it has no mind -- no understanding -- the shuffling is done without regard for the context that competent speakers depend on for conveying meaning. Every word that ChatGPT utters is "on holiday."

This is not true. AFAIK it's a 96-layer neural network with billions of parameters.
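For scale, here's a rough parameter count, assuming ChatGPT's base model has GPT-3's published shape (96 layers, model width 12288):

```python
# Back-of-envelope transformer parameter count, assuming GPT-3's published shape.
# Roughly 12 * d_model^2 parameters per layer (attention projections + MLP), embeddings ignored.
n_layers = 96
d_model = 12288

params = 12 * n_layers * d_model**2
print(f"~{params / 1e9:.0f}B parameters")   # ~174B, close to the reported 175B
```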


ReExperienceUrSenses OP t1_j71vga7 wrote

I'll just give a quick reply to this point about "genocide" here, and then post the rest of my thoughts that you spurred (thanks!) in a reply/chain of replies to your last post, in order to expand upon and better frame the position I'm coming from.

So you know what genocide is because you make analogies from your experiences. You have experienced death: you've seen it, smelled it, touched it, thought about it, and felt emotions about it, especially in relation to your own survival. You have experienced many different ways to categorize things and other people, so you understand the concept of groups of humans. You can compose from these experiences the concept of murder, and expand that to genocide. You haven't experienced nothingness, but you have experienced what it is to have something, and then NOT have that something. Language provides shortcuts and quick abstractions for mental processing. You can quickly invoke many, many experiences with a single word.
