Submitted by hackinthebochs t3_zhysa4 in philosophy
Comments
Opus-the-Penguin t1_izorju2 wrote
Interesting. How would we test for it? Is there a way of telling the difference between something that has sentience and something that mimics it?
hackinthebochs OP t1_izp7r6g wrote
We can always imagine the behavioral/functional phenomena occurring without any corresponding phenomenal consciousness, so this question can never be settled by experiment. But we can develop a theory of consciousness and observe how well the system in question exhibits the features our theory says correspond with consciousness. Barring any specific theory, we can ask in what ways the system is similar to and different from systems we know to be conscious, and whether those similarities or differences bear on the credibility of attributing consciousness to the system.
Theory is all well and good, but in the end it will have little practical significance. People tend to be quick to attribute intention or minds to inanimate objects or random occurrences. Eventually the behavior of these systems will be so similar to that of humans that most people's sentience-attribution machinery will fire, and we'll be forced to confront all the moral questions we have been putting off.
Opus-the-Penguin t1_izp9qo8 wrote
Nice succinct statement of the issue. There's a lot boiled down into those two paragraphs. Thank you.
electriceeeeeeeeeel t1_izw17ex wrote
Nope, no difference. That's why we will come to accept them at face value as the same, though many will hold the underlying value assumption that they are different because their parts are different. Still, others won't lean on that assumption, and given the lack of strong evidence, the belief that they aren't sentient will likely erode over time.
[deleted] t1_izow3bp wrote
[deleted]
CaseyTS t1_izuuyp4 wrote
We just need to find a way to skip the intervening 10 years each time so that nothing gets in the way of all these amazing developments.
At least this guy talks about a roadmap... but yes, I agree, I want to see that roadmap.
JHogg11 t1_izpmfdu wrote
I find this very odd considering that he coined the term "the hard problem of consciousness."
hackinthebochs OP t1_izpq7vt wrote
The issue of how to explain consciousness is importantly different from whether an AI can be or is conscious. An explanation of consciousness will identify features of systems that determine their level of consciousness. The hard problem of consciousness places a limit on the kinds of explanations we can expect from physical dynamics alone. But some theories of consciousness allow that physical or computational systems intrinsically carry the basic properties needed to support consciousness. For example, panpsychism says that the fundamental properties that support consciousness are found in all matter. This includes various mechanical and computational devices. And so there is no immediate contradiction in being anti-physicalist and also believing that certain computational systems will be conscious.
BBush1234 t1_izpre4h wrote
Saying that AI can be conscious when you can't explain how consciousness originates is basically just saying that you're willing to decide that it has achieved consciousness at a certain point of sophistication.
Technically, it requires a level of faith to believe that other people are conscious, and this would be similar; the main difference is that there is no obvious physical comparison between you and the AI (as there is with other people).
ShitCelebrityChef t1_j09cowg wrote
So much of the AI discussion is wishful thinking. Or magical thinking, to put it another way. Or poor thinking, to put it yet another. It's not impossible that computers could be sentient, but an honest appraisal would have to say it's more unlikely than likely. The brain is not a computer. Silicon is not carbon. Comparisons between consciousness and computing are metaphorical. It's an en vogue faith-based belief without even the redeeming qualities of a belief in god.
There is nothing as stupid as very clever people.
ConsciousLiterature t1_izqbery wrote
>This includes various mechanical and computational devices.
It also includes rocks. It also includes electrons and neutrons and photons, which never experience state change.
I would say it's a crazy theory, but honestly it's so far from being able to be called a theory that we need to make up a new word to describe it.
newyne t1_izr5kuo wrote
Also, there's the idea that the particles are conscious without the AI system itself being conscious. And again, what kind of "consciousness" are we talking about? Panpsychism assumes that sentience is ubiquitous, but sapience is still emergent.
Nameless1995 t1_izsgrc4 wrote
Chalmers himself rides a line between a form of information-dualism and panpsychism/panprotopsychism. He tends to think any formal functional organization of the relevant kind (no matter at which level of abstraction?) would have a corresponding consciousness (based on his dancing qualia/fading qualia thought experiments). So he finds it plausible that artificial machines can be conscious.
InTheEndEntropyWins t1_izsfmdr wrote
Yep, I do find it a strange position to take. I think he even said something like he could imagine that consciousness could be computational in nature.
I personally think his views have evolved, but since he is famous for the hard problem, he hasn't really been that explicit about how they have changed.
CaseyTS t1_izuv208 wrote
It's very possible that he changed his outlook between coining the term and now.
[deleted] t1_izs3dnu wrote
!remind me 10 years
MarkAmsterdamxxx t1_izs6twg wrote
If I read AI as made of silicon, I disagree; but if it is AI from a synthesis of silicon and biological material, I would give it a chance. Though not within 10 years. Maybe within 100.
__corpse_ t1_izt2x9g wrote
If that really happens, what do you think: will it lead to our downfall or to our extraordinary evolution?
Personally, I would like to see it happen. I want to know what limits our society can reach, both emotionally and technologically.
CaseyTS t1_izuw4tc wrote
I'm gonna answer in terms of mass automation and machine intelligence instead of consciousness specifically. I think artificial consciousness is already a part of AI to a small extent, and will propel automation.
Whether mass AI automation helps or hurts people will, I think, depend almost entirely on how it is adopted, by whom, when, and for what. That's the story with technology: whether a new tech gets adopted is a crapshoot, useful or not. For instance, in England they used gas lanterns instead of electric lanterns for quite a long time because that is what the infrastructure had been built to support, and it costs money to change, even though electric lights take less labor, are safer, leave the air cleaner for the city's people, etc.
Likely, if artificial general intelligence becomes widespread, it'll be controlled by the people who own tech companies. Some of these people are beholden to morals and ethics, some are not. Who specifically ends up with some relevant patent may well shape how this technology develops. If someone who is interested in military and security gets a hold of this sort of tech, expect synthetic super-soldiers at first. If a philanthropist gets it, expect robots to do dangerous or humanitarian work. Those initial uses will probably shape how the technology develops in the future: people usually optimize technology for its determined uses.
Source: my ass and a Tech & Society class I took some years ago.
-off-white t1_izt3mu8 wrote
…wouldn't that entail an AI having some sort of ego? For them to understand emotions based on the internet would show more bad than good… I don't think we will see a day when we have a true AI that has gained human ego, feelings, and creativity.
CaseyTS t1_izuwj9q wrote
I agree that imitating humans is a crapshoot. I think artificial consciousness and general intelligence are possible, even if well below the level of humans.
Yeah, maybe an AI could have some form of ego or emotions. AI already demonstrates creativity. But your point is taken that it does NOT look like human feelings and creativity at this time.
CaseyTS t1_izuun4c wrote
My nitpick is that he shouldn't have put a specific probability number on this, because he did not attempt to validate or verify it numerically. He has educated impressions and estimates about how the tech will develop, but as a physicist, I prickle at putting a number on something without quantitatively finding that number.
As for the actual subject matter: I think he's right. I actually think the consciousness problem is overblown. Subjective data (sensations, "what it's like to be a bat"), action planning, and executing actions, repeated frequently or continuously over a period of time, is a good enough definition of consciousness for me. As such, making a conscious general AI seems doable, and by my low standards, some probably exist already. I'd go so far as to say that the hardest part of making a human-like consciousness is not creating a form of consciousness, but generalizing its intelligence to the point where it can be used for multiple things (as human intelligence is).
In other words, I think making a toy model of consciousness that is either useless or only good for one thing (like chatting via text) is totally doable, while making a consciousness with enough general intelligence that it looks like human intelligence is incredibly difficult.
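To make that low bar concrete, here's a toy sense-plan-act loop in Python. It's a hypothetical sketch of the functional definition above (subjective data, planning, action, repeated over time), not a claim that this code is conscious:

```python
import random

def sense():
    # "Subjective data": some internal reading of the world.
    return {"light": random.random()}

def plan(percept, goal=0.5):
    # Action planning: pick an action that moves toward a goal state.
    return "dim" if percept["light"] > goal else "brighten"

def act(action):
    # Executing the planned action.
    print(f"acting: {action}")

# Repeated frequently or continuously over a period of time.
for _ in range(3):
    act(plan(sense()))
```

By my own argument, the hard part isn't this loop; it's making the planner general enough to work across many domains at once.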
dasein88 t1_izvrn7i wrote
Nonsense. We don't even have clear criteria for sentience.
AgentSmith26 t1_j0aawuo wrote
Supposing sentience is defined as passing the Turing test, how did he actually calculate a probability of 20%?
hackinthebochs OP t1_j0aeh1d wrote
Probability in this context usually means credence, that is, subjective probability. It's a way to quantify your expectation of an event when you can't do a frequency analysis. So Chalmers' claim should be understood as "I give 20% credence to AI sentience within 10 years."
AgentSmith26 t1_j0agzq5 wrote
Gracias for the answer - makes sense!
A quick question. Could I interpret Chalmers' statement as follows:
Out of 100 earth-like civilizations at our present technological state (2022), 20 would develop sentient AI within 10 years.
?
hackinthebochs OP t1_j0aj2im wrote
I think that's a good way to think about it. If we have a reasonably accurate understanding of the work remaining, then the credence is his expectation of how fast progress will proceed. The other relevant dimension is the accuracy of this understanding of how much is left to do. For example, is artificial sentience even possible at all? Is it a few technological innovations away, or very many?
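To make that frequency reading concrete, here's a minimal Monte Carlo sketch (the 0.20 is Chalmers' stated credence; the run count is arbitrary, just an illustration of the interpretation):

```python
import random

CREDENCE = 0.20   # Chalmers' stated credence in AI sentience within 10 years
RUNS = 100_000    # hypothetical "re-runs" of the next decade

# In each simulated run, sentient AI arrives with probability CREDENCE.
hits = sum(random.random() < CREDENCE for _ in range(RUNS))
print(f"Sentient AI arose in {hits / RUNS:.1%} of {RUNS:,} simulated runs")
```

With enough runs this converges to ~20%, i.e. roughly 20 out of every 100 of the hypothetical civilizations in your framing.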
AgentSmith26 t1_j0alxdt wrote
Muchas gracias, kind person.
phroztbyt3 t1_j0ez4p9 wrote
I for one approve of our new AI overlords.
(This comment is here just in case, so they don't kill me and I can live out my life in their lithium mines instead.)