PrivateFrank
PrivateFrank t1_j8h65qx wrote
Reply to comment by Martholomeow in Fighting Climate Change Was Costly. Now It’s Profitable. by dolphins3
"thanks Obama"
PrivateFrank t1_ivplvi0 wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
Hey I'm not an ML guy, just someone with an interest in philosophy of mind.
Intentionality and understanding are first-person (phenomenological) concepts, and I think that's enough to have the discussion. We know what it is like to understand something or to have intentionality. Intentionality in particular is a word made up to capture a flavour of first-person experience: having thoughts which are *about* something.
I think that having "understanding" absolutely requires phenomenal consciousness. Otherwise the "understanding" an AI has could be the same as however much a piece of paper understands the words written upon it. At the same time, none of the ink on that page is about anything - it just is. There's no intentionality there.
It's important to acknowledge the context at the time: quite a few psychologists, philosophers and computer scientists really were suggesting that the human mind/brain was just passively transforming information like the man in the Chinese room. It's important not to let current ML theorists make the same mistake (IMO).
The difference between the CRA and what we can objectively observe about organic consciousness is informative about where the explanatory gaps are.
PrivateFrank t1_ivpfnm2 wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
Then this instant-time version of the CRA doesn't need understanding.
But you have to compare that to a human for the analogy to mean anything, and an instant-time human being is just as empty of understanding and intentionality as the CRA.
PrivateFrank t1_ivpaaxh wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
>Yes by following more rules (rules of updating other rules).
But those rules are about improving the performance of the translation according to some benchmark from outside the rule system.
Unless one of the Chinese symbols sent into the room means "well done, that last choice was good, do it again" and is understood to mean something like that, no useful learning or adaptation can happen.
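A toy sketch of that point (the symbols, weights and reward signal are all invented for illustration): unless some symbol coming back into the room acts as a "that was good" signal, nothing in the rule book can ever change.

```python
import random

# Hypothetical rule book: each incoming symbol has several candidate outputs,
# each with a weight that can only change if feedback arrives from outside.
rule_book = {"你好": {"再见": 1.0, "谢谢": 1.0}}

def respond(symbol):
    """Pick an output symbol in proportion to its current weight."""
    outputs, weights = zip(*rule_book[symbol].items())
    return random.choices(outputs, weights=weights)[0]

def feedback(symbol, output, reward):
    """The 'well done, do that again' signal. Remove this channel and the
    rule book is frozen - no learning or adaptation is possible."""
    rule_book[symbol][output] += reward

choice = respond("你好")
feedback("你好", choice, reward=1.0)  # some external benchmark judged the choice good
```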
PrivateFrank t1_ivp8cak wrote
Reply to comment by Thorusss in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
> The Chinese room is a boring flawed argument, that only is considered relevant by people who get tricked into confusing parts of the system with the whole thing.
Are your fingers part of the system, or your corneas? Once you claim that "the whole system does X", you need to say what is and is not part of that system.
Chalmers' "extended mind" thesis suggests that "the system of you" can also include your tools and technologies, other people, and even entire societies.
PrivateFrank t1_ivp7x8b wrote
Reply to comment by Nameless1995 in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
>Newer one's are still following rules. It's still logic gates and bit manipulation underneath.
Yeah, but at the same time the translation "logic" is being continuously refined through learning.
The book of rules is static in the old example.
PrivateFrank t1_ivp7i5c wrote
Reply to comment by PassionatePossum in [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
>Right from the start, it assumes that there is a difference between „merely following rules“ and „true intelligence“.
It depends on how flexible those rules are, right? Are the rules a one-to-one lookup, or are there branching paths with different outcomes?
If the man in the room sees an incoming symbol, looks it up in the book, and sees only one possible output symbol, and sends that out, then he doesn't need to understand Chinese.
If he has more than one option of output, and needs to monitor the results of his output choices, then he's no longer just a symbol translator. He's now an active participant in shaping the incoming information. To get better at choosing symbols, he's going to have to learn Chinese!
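For the one-to-one case, a sketch of how little is needed (the symbols and the rule book are invented for illustration): the room is just a lookup, with no choices to make and no results to monitor.

```python
# Hypothetical one-to-one rule book: each incoming symbol has exactly one
# possible output. Applying it requires no choice, no feedback, and no Chinese.
rule_book = {"你好": "再见", "谢谢": "不客气"}

def room(symbol):
    """Look the symbol up and pass the single possible answer back out."""
    return rule_book[symbol]

print(room("你好"))  # "再见" - produced without anything resembling understanding
```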
PrivateFrank t1_iv07gpk wrote
Reply to comment by Aros5 in How to have better arguments by fchung
The first task is to ask them questions and let them answer. People need to feel that you understand their position, whether it's emotional or reasoned or a mixture of the two, before they will entertain the idea that your opinion is worth listening to.
PrivateFrank t1_jaeiese wrote
Reply to comment by ThomasHL in Britain breaks 'green grid' record with latest 100 per cent clean power milestone by m_Pony
>And the final component is the UK grid system pays every electricity producer the price of the most expensive energy producer. If 1% of the grid is gas, 100% of the grid pays gas prices. Even on this one day, there was a gas power plant running as a back up (it just wasn't used).
>That last one is part of why very few UK homes have electricity based heating systems. There will never be a time when electricity costs less than gas, so gas has been the cheaper option.
IIRC the whole European energy market works like that.
And it wasn't 25 consecutive hours, either...
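A toy worked example of that marginal ("pay-as-clear") pricing rule. The generators, offer prices and capacities below are made up, not real UK figures: the point is just that the last, most expensive unit needed to meet demand sets the price paid to everyone.

```python
# Toy illustration of marginal ("pay-as-clear") pricing with invented numbers.
# Every dispatched generator is paid the offer of the most expensive generator
# needed to meet demand - so a sliver of gas can set the price for the whole grid.
generators = [
    ("wind",    10.0, 30),   # (name, offer price in £/MWh, capacity in GW)
    ("nuclear", 40.0, 6),
    ("gas",    150.0, 10),
]

def clearing_price(demand_gw):
    """Dispatch cheapest-first and return the offer of the last unit used."""
    supplied = 0
    for name, price, capacity in sorted(generators, key=lambda g: g[1]):
        supplied += capacity
        if supplied >= demand_gw:
            return price
    raise ValueError("not enough capacity")

print(clearing_price(35))  # 40.0  - nuclear is on the margin and sets the price
print(clearing_price(37))  # 150.0 - 1 GW of gas on the margin prices the whole grid
```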