Submitted by seethehappymoron t3_11d0voy in philosophy
baileyroche t1_ja6rnca wrote
As far as I can tell, we haven’t been able to prove that brain complexity = consciousness. Meaning, there is more to consciousness than the complexity of a neural network.
Take a look at Donald Hoffman’s work regarding consciousness. He proposes that consciousness is the only fundamental part of reality, and all of our perception is a simplified tool created through evolution. “Fitness beats truth,” so to speak.
I disagree with the article. I don’t think our limbic system is necessary for consciousness. In fact, it’s incredibly rare, but some humans have been born without a limbic system and are still conscious. I also disagree that consciousness requires some external sensory input. First of all, the AI is getting input through text. And second, look at humans with “locked-in syndrome”: they cannot feel, speak, or interact with the world, but we know they are still conscious.
I do wonder if AI can become conscious. We don’t understand consciousness, but we seem to be able to create new consciousness in human beings. I don’t think a physical body with sensory inputs is necessary for consciousness, and if it is, then it’s just a matter of time.
TKAAZ t1_ja757mt wrote
How do you prove that any other human besides you is conscious?
Well, they will tell you, and you believe them.
Now what if that thing is not a human?
Xavion251 t1_jabq7pa wrote
Well, also I share most of my DNA with other humans. They look roughly like me, act roughly like me, and biologically work the same as me.
So it's a far more reasonable, simple explanation that they are conscious just like I am. To a somewhat lesser degree, this can extend to higher animals as well.
But an AI that acts conscious still differs from me in clear ways, both in how it works and in how it came to be. So I would place the odds significantly lower that it is really conscious and isn't just acting that way.
That said, I would still treat them as conscious to be on the safe side.
TKAAZ t1_jac5w7z wrote
You are literally a bunch of signals. So is an "AI" existing in a bunch of silicon. There is nothing (so far) precluding consciousness from existing in other types of signals other than our assumptions.
As for your arguments, it seems that you argue that "since other humans look like you, they must be conscious", and you then conclude that this implies that "entities that do not look human are not conscious".
I may agree with the first, but that does not entail the converse, and hence it cannot be used here. It's like saying "if it rains, the street is wet" and then concluding "if the street is wet, it rained".
Xavion251 t1_jac7mgs wrote
>You are literally a bunch of signals. So is an "AI" existing in a bunch of silicon.
Putting that (problematic IMO, as I'm a dualist) assumption aside and simply granting that it is true: human brains use different kinds of signals, generated in different ways. Does that difference matter? Neither you nor I can prove it either way.
>As for your arguments, it seems that you argue that "since other humans look like you they must be conscious", and you then conclude that this implies that "entities that do not look human are not conscious.".
This is reductive. I'm not talking about superficial appearance. I wouldn't conclude that a picture of a human is conscious - for example.
But I would conclude that something that by all measures works, behaves, and looks (both inside and out, on every scale) like me probably is also conscious like me.
It would be rather contrived to suggest that in a world of 7 billion creatures like me (and billions more that are more roughly like me - animals), all of them except for me in particular just look and act conscious while I am truly conscious.
>I may agree with the first, but that does not entail the opposite direction, and hence it can not be used here. It's like saying "if it rains the street is wet" and then concluding "if the street is wet it rains".
No, because we can observe the street being wet for other reasons. We can't observe consciousness at all (aside from our own).
TKAAZ t1_jaclivj wrote
>Does that difference matter? Neither you or I can prove either way.
I did not say it did or did not; I am saying you cannot preclude that it does, which is what the comment OP is claiming. It seems to me you are inadvertently agreeing with this. My main point was to refute the comment OP's claim that
>As far as I can tell, we haven’t been able to prove that brain complexity = consciousness. Meaning, there is more to consciousness than the complexity of a neural network.
as their observation of a "lack of proof" does not imply their conclusion. Furthermore, you mention
>No, because we can observe the street being wet for other reasons. We can't observe consciousness at all (aside from our own).
Again, I think you misunderstand my point; my example was just an analogy for why the conclusion you arrive at is incorrect at a logical level. You claim that 1) you are conscious, and 2) "because others look like you (subject to some likeness definition you decided upon), they are likely to be conscious". Fine. However, this does not imply the conclusion you try to show, i.e. that 3) "someone who is (likely to be) conscious must look like me (subject to the likeness definition you decided upon)". This sort of reasoning is a fallacy at its core, and 3) is a non sequitur from the premise 1) and the assumption 2). You are basically claiming that it must rain because the street is wet. It's extremely common for people to make these mistakes, however, and unfortunately it makes discussing things quite difficult in general.
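The fallacy being described here is affirming the consequent: "P implies Q" does not entail "Q implies P". A minimal sketch (the `implies` helper is just an illustrative stand-in for material implication, not anything from the thread) can exhibit a concrete counterexample by brute force:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Enumerate every truth assignment for (p, q).
rows = list(product([False, True], repeat=2))

# Read p as "it rains" and q as "the street is wet". An assignment where
# p -> q holds but q -> p fails is a counterexample to the converse.
counterexamples = [(p, q) for p, q in rows if implies(p, q) and not implies(q, p)]
print(counterexamples)  # [(False, True)]: the street is wet, but it didn't rain
```

The single counterexample `p=False, q=True` is exactly the "street is wet for other reasons" case: the conditional holds, its converse does not.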
Yung-Split t1_ja6vn77 wrote
I'm going to need some sources on the "humans born without limbic system" thing
baileyroche t1_ja6w62y wrote
Search “Urbach-Wiethe disease.”
ErisWheel t1_ja851wd wrote
>Urbach-Wiethe disease.
You're misunderstanding the disease that you're referencing. The limbic system is a complex neurological system in which multiple regions of the brain work in concert to perform a variety of tasks: essential hormonal regulation of things like temperature and metabolism, modulation of fundamental drives like hunger and thirst, emotional regulation, and memory formation and storage. It includes the hypothalamus and thalamus, the hippocampus, and the amygdala. Total absence of the limbic system would be incompatible with life.
Urbach-Wiethe patients often show varying levels of calcification in the amygdala, which leads to a greater or lesser degree of corresponding cognitive impairment and "fearlessness" that is otherwise atypical in a person who does not have that kind of neurological damage. The limbic system is not "absent" in these patients. Rather, a portion of it is damaged and the subsequent function of that portion is impaired to some extent.
baileyroche t1_ja8kaqt wrote
Ok fair. It is not the entire limbic system that is gone in those patients.
ErisWheel t1_ja99svb wrote
Yeah, sorry if it seemed nit-picky, but I think these are important distinctions when we're talking about where consciousness comes from, or about which disparate elements might or might not be necessary conditions for it. Missing the entire limbic system and still having consciousness is almost certainly impossible without some sort of supernatural explanation of the latter.
Similarly, with locked-in syndrome, I think there's some argument there about whether we really would know if those patients were conscious in the absence of some sort of external indicator. What does "consciousness" entail, and is it the same as "response to stimuli"? If they really can't "feel, speak or interact with the world" in any way, what is it exactly that serves as independent confirmation that they are actually conscious?
It's an interesting quandary when it comes to AI. I think this professor's argument falls pretty flat, at least in the short summary of it that's being offered. He's saying things like "all information is equally valuable to AI" and "dopamine-driven energy leads to intention," which is supposedly synonymous with "feeling" and therefore consciousness. But these points aren't well-supported, so unless there's more that we're not seeing, the dismissal of consciousness in AI is pretty thin as presented.
In my opinion, it doesn't seem likely that what we currently know as AI would have something that could reasonably be called "consciousness", but a different reply above brought up an interesting point - when a series of increasingly nuanced pass/fail logical operations gets you to complex formulations that appear indistinguishable from thought, what is that exactly? It's hard to know how we would really separate that sort of "instantaneous operational output" from consciousness if it became sophisticated enough. And with an AI, just given how fast it could learn, it almost certainly would become that sophisticated, and incredibly quickly at that.
In a lot of ways, it doesn't seem all that different from arguments surrounding strong determinism in regards to free will. We really don't know how "rigid" our own conscious processes are, or how beholden they might be to small-scale neurochemical interactions that we're unable to observe or influence directly. If it turns out that our consciousness is emerging as something like "macro-level" awareness arising from strongly-determined neurochemical interactions, it's difficult to see how that sort of scenario is all that much different from an AI running billions of logical operations around a problem to arrive at an "answer" that could appear as nuanced and emotional as our conscious thoughts ever did. The definition of consciousness might have to be expanded, but I don't think it's a wild enough stretch to assume that it's "breathless panic" to wonder about it. I think we agree that the article isn't all that great.