Submitted by wtfcommittee t3_1041wol in singularity
ChronoPsyche t1_j33atzl wrote
Perception is necessary for sentience. ChatGPT does not have any perception. This is ridiculous.
myusernamehere1 t1_j34seyn wrote
Why? This is not the case. Would a human brain grown in a lab with no sensory connections not be conscious?
ChronoPsyche t1_j34vw1w wrote
Actually, I have no clue. We've never grown a human brain in a lab. It's impossible to say. That's kind of irrelevant though because we know that a human brain has the hardware necessary for sentience. We don't know that for ChatGPT and have no reason to believe it does.
And when I say perception, I don't just mean perception of the external environment, but perception of anything at all, any awareness whatsoever. There is no mechanism by which ChatGPT can perceive anything, whether internal or external. Its only input is vectors of numbers that represent tokenized text. That's it.
Let's ask a better question: why would it be conscious? People think it is because it talks like a human, but that's just a trick. It's a human-language imitator, and that's all.
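To make the "tokenized text" point above concrete, here is a minimal sketch using OpenAI's tiktoken library (an assumption: that it is installed via pip install tiktoken; cl100k_base is the encoding used by ChatGPT-era models):

    # What "tokenized text" means in practice: the model never sees raw
    # characters, only integer token IDs produced by an encoder like this.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    token_ids = enc.encode("Is ChatGPT sentient?")
    print(token_ids)              # a short list of integer token IDs
    print(enc.decode(token_ids))  # round-trips back to the original string

Inside the network those integers are mapped to embedding vectors; that stream of numbers is the model's entire "input" for a prompt.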
myusernamehere1 t1_j34yb7o wrote
Oh, I'm not arguing that ChatGPT is conscious, I just don't think you have arrived at any meaningful reasons why it couldn't be conscious. Who's to say that an input of "vectors of numbers that represent tokenized text" is unable to result in consciousness? Again, I do not think ChatGPT is necessarily advanced enough to be considered sapient/sentient/conscious.
ChronoPsyche t1_j34yu96 wrote
> I just don't think you have arrived at any meaningful reasons why it couldn't be conscious.
I don't need to arrive at meaningful reasons why it couldn't be conscious. The burden of proof is on the person making the extraordinary claim. OP's proof for it being conscious is "because it says it is".
Also, I'm not saying it can't be conscious as I can't prove a negative. I'm saying there's no reason to believe it is.
myusernamehere1 t1_j34zt78 wrote
True. And I agree for the most part. Yet you opened with, and then provided, arguments for why you think it is not conscious, none of which hold up to scrutiny. I am just arguing against those claims.
ChronoPsyche t1_j350ri7 wrote
I mean, they do hold up to scrutiny. We have no reason to think that a probability model that merely emulates human language and doesn't have any sensory modalities could be sentient.
That's not an airtight argument because, again, I can't prove a negative, but the definition of sentience is "the capacity to experience feelings and sensations," and ChatGPT absolutely does not have that capacity, so there's no reason to think it is sentient.
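To illustrate the "probability model" framing in the comment above: a language model assigns a probability to each candidate next token given the preceding context and samples from that distribution. The toy bigram counts below are invented purely for illustration and are nothing like ChatGPT's actual neural network, but the sampling step is the same in spirit:

    # A toy "probability model of language": given a context, estimate a
    # probability for each candidate next token, then sample one.
    import random

    # Hypothetical next-token counts, as if tallied from a text corpus.
    next_token_counts = {
        ("I", "am"): {"a": 40, "not": 9, "sentient": 1},
    }

    def sample_next(context):
        counts = next_token_counts[context]
        total = sum(counts.values())
        weights = [c / total for c in counts.values()]
        # Sample in proportion to estimated probability; no inner
        # experience is involved, just arithmetic over counts.
        return random.choices(list(counts), weights=weights)[0]

    print(sample_next(("I", "am")))  # usually "a"; occasionally "sentient"

On this framing, such a model outputting "I am sentient" reflects statistics of its training text, not a report of experience, which is the crux of the disagreement in this thread.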
myusernamehere1 t1_j352wrq wrote
Sentience is the ability to have "feelings." These do not have to be similar to the feelings we humans understand; they could be entirely alien to our experiential capabilities. The ability to interpret text prompts could be a sort of sensory modality. And I'd argue that the way the human brain operates can be abstracted to a complex "probability model." It is very possible that consciousness itself is "simply" an emergent property of complex information processing.
Have you seen the paper where a researcher hooked up a rat brain organoid to (in simple terms) a brain chip and taught it to fly a plane in a 3D simulated environment? Or, more recently, where a human brain organoid was taught to play Pong? These organoids had no ability to sense their environment either, and both may very well have some limited level of sentience/consciousness.
ChronoPsyche t1_j353a24 wrote
Nothing you're saying is relevant. Anything could be possible, but that isn't an argument against my claims. My keyboard could have strange alien sensory modalities that we don't understand. That doesn't make it likely.
myusernamehere1 t1_j354tls wrote
Well, I disagree with everything you just said and find the keyboard analogy humorously off-target. My argument is not "anything is possible."
ChronoPsyche t1_j355cjq wrote
What is your argument then? You haven't actually stated an argument, you've just told me mine is wrong.
myusernamehere1 t1_j355jto wrote
My argument is that your arguments aren't valid lol
ChronoPsyche t1_j355vbk wrote
"I agree with your conclusion but I just thought id point out that your arguments are bad". Lol that's rather pedantic but okay. You do you.
myusernamehere1 t1_j356iy7 wrote
Well, I saw a bad argument (or a few), and I pointed it out and explained my reasoning. Not sure why that's a bad thing; I think it promotes educated discourse.
ChronoPsyche t1_j3575rb wrote
Fair enough.
Large-Hope-6429 t1_j35uakg wrote
This guy is definitely not in the club
2Punx2Furious t1_j34zjvb wrote
It does have some perception. Just because it doesn't have the full sensory capability that (most) humans have doesn't mean it has none. Its only input is text, but it does have that input.
Also, for "sentience," only "self-perception" is really necessary by definition, and yes, it looks like it doesn't have that. But I don't really care about sentience, "awareness" or "consciousness". I only care about intelligence and sapience, which it seems to have to some degree.
ChronoPsyche t1_j34zsvq wrote
>But I don't really care about sentience, "awareness" or "consciousness". I only care about intelligence and sapience, which it seems to have to some degree.
Okay, but this discussion is about sentience, so that's not really relevant.
2Punx2Furious t1_j3501j9 wrote
Sure, I just wanted to point that out. Sentience is of relatively low importance/impact to an AGI. It doesn't need to feel things to understand them, or value them.