Czl2 t1_j8v60kl wrote

Reply to comment by CypherLH in Emerging Behaviour by SirDidymus

Visit Wikipedia or the Encyclopaedia Britannica and compare what I told you against your understanding. I expect you will discover your understanding does not match what is generally accepted. Do you think both encyclopedias are wrong?

Here is the gap in bold:

> As I pointed out before, if you accept its premise then you must accept that NOTHING is **'actually intelligent'** unless you invoke something like the "vitalism" you referenced and claim humans have special magic that makes them...

The argument does not pertain to intelligence. To quote my last comment:

>> The argument says no matter how intelligent it seems, a digital computer executing a program cannot have a "mind", "understanding", or "consciousness".

Do you see the gap? Your concept is "actually intelligent". The accepted concepts are "mind", "understanding", and "consciousness", regardless of intelligence. A big difference, is it not?

CypherLH t1_j8vdxku wrote

I'll grant there is a gap there... but it actually makes the whole thing _weaker_ than I was granting... cause I don't give a shit about whether an AI system is "conscious" or "understanding" or a "mind"; those are BS meaningless mystical terms. What I care about is the practical demonstration of intelligence: what measurable intelligence does a system exhibit. I'll let priests and philosophers debate whether it's "really a mind" and how many angels can dance on the head of a pin while I use the AI to do fun or useful stuff.

Czl2 t1_j9030la wrote

> I’ll grant there is a gap there... but it actually makes the whole thing _weaker_ than I was granting...

What you described as the Chinese room argument is not the commonly accepted Chinese room argument. Your version was about “intelligence”; the accepted version is about “consciousness” / “understanding” / “mind”, regardless of how intelligent the machine is.

Whether the commonly accepted Chinese room argument is “weaker” is difficult to judge, given the difference between the two. I expect judging whether a machine has “consciousness” / “understanding” / a “mind” will be harder than judging whether that machine is intelligent.

To judge intelligence there are objective tests. Are there objective tests to judge “consciousness” / “understanding” / “mind”? I suspect not.
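
To make the contrast concrete, here is a toy Python sketch (purely illustrative; the function names and test data are hypothetical, not from any real benchmark). An intelligence test reduces to grading answers against a key, while nobody knows what body to write for the consciousness version:

```python
# Toy illustration: an objective test of intelligence can be as mundane
# as grading a system's answers against an answer key.
def intelligence_score(answers: list[str], answer_key: list[str]) -> float:
    """Fraction of test items answered correctly -- an objective score."""
    return sum(a == k for a, k in zip(answers, answer_key)) / len(answer_key)

def consciousness_score(system: object) -> float:
    """No generally accepted objective test exists to fill this in."""
    raise NotImplementedError("how would you measure a 'mind'?")

# Example: a system scores 0.5 on a two-question test.
print(intelligence_score(["4", "Paris"], ["4", "London"]))  # 0.5
```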

> cause I don’t give a shit about whether an AI system is “conscious” or “understanding” or a “mind”, those are BS meaningless mystical terms.

For you they are “meaningless mystical terms”. For many others, these are precisely the things they believe make humans “human”. They care about them because they determine how mechanical minds are viewed and treated by society.

When you construct an LLM today you are free to delete it. When you create a child, however, you are not free to “delete it”. If human minds are ever judged to be equivalent to machine minds, will machine minds come to be treated like human minds?

Or will human minds instead come to be treated like machine minds, which we are free to do with as we please (enslave / delete / ...)? When human minds come to be treated like machines, will it make sense to care whether they suffer? To a machine, what is suffering? Is your car “suffering” when the check engine light is on? It is but a “status light”, is it not?
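
To put the status-light point in code, here is a hypothetical toy (the `Car` class and its fields are made up for illustration): what reads to us as the car “complaining” is nothing more than a bit being set.

```python
# Hypothetical toy: the "check engine" light is a status flag, nothing more.
from dataclasses import dataclass

@dataclass
class Car:
    check_engine: bool = False  # the "status light"

    def detect_fault(self) -> None:
        # Nothing resembling an experience happens here;
        # a single boolean changes state.
        self.check_engine = True

car = Car()
car.detect_fault()
print(car.check_engine)  # True -- a report of state, not a feeling
```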

> What I care about is the practical demonstration of intelligence: what measurable intelligence does a system exhibit. I’ll let priests and philosophers debate whether it’s “really a mind” and how many angels can dance on the head of a pin while I use the AI to do fun or useful stuff.

I understand your attitude since I share it.
