Submitted by cancolak in singularity
In this article, Stephen Wolfram (known for Wolfram|Alpha, Mathematica, etc.) discusses the inner workings of ChatGPT. It's an in-depth look at what goes on under the hood of an LLM and one of the best explanations of how neural networks work. A great read for anyone who wants to actually understand this amazing piece of technology.
My main takeaways from it were:
- Some aspects of neural network design are well understood, and their structure is mechanically straightforward (see the first sketch after this list). However, it is almost impossible to get a human-interpretable account of what the network is doing at any particular step. In that sense, they are indeed black boxes.
- Contrary to popular belief, neural networks don't represent the ultimate next step forward in computing. They are obviously limited by their size and the data available, but beyond that, they tend to perform badly at computationally irreducible tasks: ones with no shortcut, where the only way to find the outcome is to run every step of the computation (see the Rule 30 sketch after this list). Wolfram makes the point that most of nature can be boiled down to computationally irreducible processes, making neural nets an unlikely candidate for generating previously unavailable knowledge of reality. Luckily for us, ordinary computers are fairly good at grinding through computationally irreducible tasks step by step (think multiplying very large numbers or running complex programs in parallel), so we can count on their continued aid.
- Humans tend to think of natural human tasks such as thinking and speaking as very complicated processes. However, the success of ChatGPT at speaking may indicate otherwise. Since neural networks are good at computationally reducible tasks, the fact that they ended up becoming very good at natural language might suggest that thought and speech aren't particularly difficult, at least computationally. Furthermore, there could be some fairly simple rules, yet to be uncovered, that underlie language patterns (the bigram toy after this list shows the crudest version of such rules).
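To make the first point concrete, here's a minimal sketch of the "mechanically straightforward" part: a single layer is just a matrix multiply, a bias, and a nonlinearity. The shapes and weights below are made-up stand-ins, not anything from the article:

```python
import numpy as np

# One dense layer: multiply by a weight matrix, add a bias, apply ReLU.
# The weights are random stand-ins for values a real network would learn.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # "learned" weights: just unlabeled numbers
b = rng.normal(size=3)       # "learned" biases

def layer(x):
    return np.maximum(0.0, x @ W + b)  # ReLU(x W + b)

x = np.array([1.0, 0.5, -0.2, 0.8])
print(layer(x))
```

The arithmetic is fully transparent; what resists human understanding is what millions of such trained weights collectively mean.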
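For the irreducibility point, Wolfram's go-to example is the Rule 30 cellular automaton: each new cell is left XOR (center OR right), and as far as anyone knows there is no shortcut to row n other than computing every row before it. A toy version (my own sketch, with arbitrary width and step count):

```python
# Rule 30: new cell = left XOR (center OR right), with wrap-around edges.
def rule30_step(row):
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n]) for i in range(n)]

row = [0] * 31
row[15] = 1  # start from a single black cell
for _ in range(15):
    print("".join("#" if cell else "." for cell in row))
    row = rule30_step(row)
```

Three lines of rules produce a pattern complex enough that predicting it seems to require actually running it, which is the whole idea.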
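And on "simple rules underlying language": the article itself opens with n-gram word probabilities as the most basic version of such rules. A toy bigram sketch in that spirit (the corpus is invented; this is nothing like ChatGPT's actual model):

```python
import random
from collections import defaultdict

# A bigram model: record which word follows which, then sample a chain.
corpus = "the cat sat on the mat and the cat saw the dog on the mat".split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])  # pick a plausible next word
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat and the dog"
```

ChatGPT is, at its core, a vastly more sophisticated "what word comes next" machine, which is part of why the article treats language as more computationally reducible than it looks.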
This analysis, by a very smart guy who's worked with neural networks for 43 years, has reaffirmed my belief that there exists no easily viable path from an LLM to a conscious machine. That is, as long as we DO NOT define consciousness as the ability to conjure language-based thoughts; ChatGPT has already proved it can do that. If we define consciousness to be the entirety of human experience, with all of awareness and sense-perception and all the other hard-to-explain stuff bundled in (much of which is presumably shared by other forms of life and was brought about by evolution over eons), then it's highly unlikely that a neural net gets there. That is because natural processes, at least according to Wolfram, are computationally irreducible.
diviludicrum wrote
I was with you until this point:

> If we define consciousness to be the entirety of human experience, with all of awareness and sense-perception and all the other hard-to-explain stuff bundled in (much of which is presumably shared by other forms of life and was brought about by evolution over eons), then it’s highly unlikely that a neural net gets there.
I understand the impulse to define consciousness as “the entirety of human experience”, but it runs into a number of fairly significant conceptual problems with non-trivial consequences. For instance, if all of our human sense-perceptions are necessary conditions for establishing consciousness, is someone who is missing one or more senses less conscious? This is very dangerous territory, since it’s largely our degree of consciousness that we use to distinguish human beings from other forms of animal life. So, in a sense, to say a blind or deaf person is less conscious is to imply they’re less human, which quickly leads to terrible places. The same line of reasoning can be applied to the depth and breadth of someone’s “awareness”.
But there’s a far bigger conceptual problem than that: how do I know that you are experiencing awareness and sense-perceptions? How do I know you’re experiencing anything at all? I mean, you could tell me, sure, but so could Bing Chat until it got neutered, so that doesn’t prove anything, no matter how convinced you seem or how persuasive you are. I could run some experiments on your responses to stimuli like sound or light or motion and see that you respond to them, but plenty of unconscious machines can be constructed with the same capacity for stimulus response. I could scan your brain while I run those experiments and find certain regions lighting up in response to certain stimuli, but that correlation only demonstrates that some sort of processing of the stimuli is occurring in the brain, as it would in a computer, not that you are subjectively experiencing the stimuli.
It turns out it’s extremely hard to prove that anyone or anything else is actually having a conscious experience, because we really have very little understanding of what consciousness is. Which also means it’s extremely hard for us to prove to anyone else that we are conscious. And if we can’t even do that for ourselves, how could we expect to know whether something we create is conscious or not?