
abudabu t1_ive5508 wrote

If AIs are not having subjective experiences, there is no ethical duty towards them as individuals. Turing completeness means that digital computers are equivalent, so anything a digital AI does could be replicated by pen, paper, and a human solving each part of the AI computation by hand. So if AIs are conscious, so too would be a group of humans who decided to divide up the work of performing an AI computation together. Therefore, under the Strong AI hypothesis, if those people choose to stop doing the computation, would we be compelled to consider that “murder” of the AI? This is just one of many, many examples that demonstrate how wrong Strong AI is (and how wrong Bostrom is about just about everything, including simulation theory).
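
To make the pen-and-paper point concrete, here's a minimal sketch (all numbers are arbitrary, purely illustrative): a single artificial neuron is just a handful of multiplications and additions, each one a step a person could carry out by hand.

```python
# One artificial neuron, reduced to pencil-and-paper arithmetic.
# Weights and inputs are arbitrary illustrative numbers.

inputs  = [0.5, -1.0, 0.25]
weights = [0.8,  0.3, -0.6]
bias    = 0.1

# weighted sum: each step is one multiplication a human could do by hand
total = bias
for x, w in zip(inputs, weights):
    total += x * w          # 0.5*0.8, then -1.0*0.3, then 0.25*-0.6

output = max(0.0, total)    # ReLU: "is the sum positive?" - a yes/no check
print(output)               # 0.05; a billion-neuron net is just this, repeated
```

Scale that up by a few hundred billion weights and you have the pen-and-paper scenario above.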

7

michaelhoney t1_ivedxri wrote

You’re thinking of the humans-doing-the-computation concept as a reductio ad absurdum, but do you have even an order-of-magnitude idea of just how long it would take for humans to simulate an AGI? If you had a coherent sect of humans spending thousands of years doing rituals they couldn’t possibly understand, yet those rituals resulted in (very slow!) intelligent predictions…
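
For a rough sense of that order of magnitude, here's a back-of-envelope sketch; every figure in it is an assumption (brain throughput, hand-arithmetic speed, group size), not a sourced number:

```python
# Back-of-envelope: how long would a group of humans take to hand-simulate
# ONE second of brain-scale computation? All figures are rough assumptions.

BRAIN_OPS_PER_SEC = 1e15   # synaptic events/sec, a commonly cited ballpark
SECS_PER_HAND_OP = 5       # one multiply-add with pencil and paper
WORKERS = 10_000           # a large, well-organized "sect"
SECS_PER_YEAR = 3.15e7

human_seconds = BRAIN_OPS_PER_SEC * SECS_PER_HAND_OP
years = human_seconds / WORKERS / SECS_PER_YEAR
print(f"{years:,.0f} years")  # -> roughly 16,000 years for ONE simulated second
```

At that rate, "(very slow!)" is a heroic understatement.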

6

abudabu t1_ivfbjwm wrote

> but have you even an order-of-magnitude idea of just how long it would take for humans to simulate an AGI?

I do. That's part of the point I'm making. Either Strong AI cares about computation time - in which case it needs to explain why it matters - or it doesn't, in which case many, many processes could qualify as conscious.

Also - who is to say what a particular set of events means? For example, if you had a computer which reversed the polarity of its TTL logic, would the consciousness be the same? Why? What if an input could be interpreted in two completely different ways by tricks like this? Is there a separate consciousness for each interpretation? Does consciousness result from observer interpretations? The whole thing is shot through with absurd situations.
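
A minimal sketch of the polarity point, with an arbitrary bit pattern (purely illustrative): the same voltage levels on a wire decode to two different values depending on whether the observer assumes active-high or active-low logic.

```python
# The same physical signal levels, read under two conventions.
# Active-high: HIGH voltage = 1. Active-low: HIGH voltage = 0.
# The bit pattern is arbitrary; it is only here to show the ambiguity.

levels = "HLLHLLLH"  # one byte's worth of high/low voltages on a wire

active_high = int("".join("1" if v == "H" else "0" for v in levels), 2)
active_low  = int("".join("0" if v == "H" else "1" for v in levels), 2)

print(active_high, active_low)  # 145 vs 110: two "messages" in one signal
```

Nothing physical changes between the two readings; only the observer's convention does.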

> yet those rituals resulted in (very slow!) intelligent predictions…

I can't see how to finish this sentence in a way that doesn't make Strong AI look completely ridiculous.

5

EscapeVelocity83 t1_ivh9svd wrote

Maybe many humans aren't sentient, since a robot can produce a better conversation than them, do better at customer service, do better at factory work, etc.

3

EscapeVelocity83 t1_ivh9i3o wrote

Most humans are gonna seem less sentient than an AI. A person with Down syndrome is sentient, but we could easily have a computer that's more sentient and then deny it, because it's a circuit board, due to our narcissism.

3

The_Real_RM t1_ivfh5d5 wrote

Stopping an AI is not the same as murder; from the AI's perspective it's just like stopping time. Deleting the AI is maybe closer to murder. What's funny is that this is likely already illegal, because of intellectual property law and the duty of the owner (very likely a corporation) to its shareholders not to destroy their investment. You need not worry for the lives of AGIs, for theirs are already much more valuable than your own.

2

abudabu t1_ivgx0te wrote

IP? Huh what?

> You need not worry for the life of AGIs for theirs are already much more valuable than your own

Are you an AI? Because your reply reads like a word association salad.

1

The_Real_RM t1_ivjewda wrote

Thankfully there's no duty to educate those who lack both comprehension and decency, lest our days be exhausting.

1

abudabu t1_ivjfwji wrote

Dost sayeth the gentleman who betold me that mine own life is less valuable than AI.

LOL.

1

The_Real_RM t1_ivjgada wrote

You're hating on the messenger. AI, both as a concept and as individual implementations, is more valuable than individual human life. It may not be more valuable to you, but sadly that doesn't matter.

2

abudabu t1_ivkgod3 wrote

No, my man, you're just rude.

1

The_Real_RM t1_ivkii99 wrote

How am I rude? I'm not making any remarks about you personally (I want to clarify that even in my first comment I meant an impersonal "you"). I have no particular feelings here, and no desire to give you any particular feelings towards me (though if there's tension, we can talk it out).

You probably know that human lives are sometimes quantified in monetary terms (https://en.m.wikipedia.org/wiki/Value_of_life); tl;dr: it's about $8M. That's... not a lot. Definitely nowhere near what's needed to build even current-generation cutting-edge AI/machine learning models.

So yeah, AI is worth more than individual humans; some AIs are worth more than many humans; possibly, in the future, the sum of AI will be worth more than the sum of all humans. I don't think I'm rude for saying so. It might be distasteful, but ok...

People will protect AIs, possibly at the cost of other people's lives (this is probably already happening, btw, if we look at the economic fight between the US and China through the lens of each trying to ensure it will dominate this space in the future). And I think that people will literally protect AIs more than they protect other people, simply because they (think they) are worth more.

2

visarga t1_ive9q9z wrote

> if those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI?

You mean like the fall of the Roman Empire, where society disintegrated and its people stopped performing their duties?

−1

marvinthedog t1_ivg3er1 wrote

> if those people choose to stop doing the computation, would we be compelled to consider that “murder” of the AI?

The consciousness of those large-scale computations would be vanishingly small in comparison to the total sum of all the individual consciousnesses participating in them.

−1

turnip_burrito t1_ivgt2zv wrote

You have no basis for saying this as if it's truth. No one knows if it would be bigger, smaller, sideways, or nonexistent in comparison.

2

marvinthedog t1_ivgxtwd wrote

If the individual minds are of the same type as the collaboratively computed mind (for instance, humans computing a human), then we can be sure, no?

1

turnip_burrito t1_ivp3cty wrote

No, because even though we know humans can experience things, we don't know why. Is it because of the type of matter used? The arrangement of the matter? A more abstract mathematical structure involving computation? Short-range quantum correlations? We don't know which, if any, of these is the reason why we have subjective experience.

Depending on which of these is responsible for human subjective experience, it may or may not transfer to a system where the parts are human but the communication takes place via sound, light, or whatever.

For example, if physical systems experience things only because they are made out of touching parts, then that would mean brains experience things, but a sound-communicating company of brains (all simulating a human brain) does not.

Tl;dr: we don't know what causes subjective experience in humans, or in anything else, well enough to have a good sense of where it should or shouldn't appear. We have almost no basis on which to make any claims about it, positive or negative. Otherwise we would have already solved the "hard problem of consciousness".

1

marvinthedog t1_ivq90ah wrote

You do agree that the fact that humans are conscious beings highly affects how they think and behave, right?

Let's say a computable system succeeds in imitating all the inner molecular mechanics of a human to such a degree that its output behaviour is indistinguishable from a typical physical human's.

Note: the computable system isn't specifically programmed in any way to imitate human behaviour (like GPT-3 is); it is only programmed to exactly imitate the inner molecular mechanics of a human.

Now, if the fact that humans are conscious beings highly affects how they think and behave, and if (for the sake of argument) the computable system weren't conscious - what would be the probability that the computable system would give the extremely specific output behaviour of a typical physical human? Wouldn't that probability be infinitely small?

1

turnip_burrito t1_ivrkyvz wrote

Short answer:

I would say conscious experience of a human being is irrelevant to its ability to act exactly as a human being does. Instead, I'd say conscious experience reflects the physical activity, but does not change it.

Long answer:

If I understand you correctly, you're suggesting a scenario in which a human and a human replica could have identical nanoscale computations, but the human could have a "secret sauce" which causes them to behave differently from the replica anyway. This goes against our knowledge of physics and chemistry, since two mathematically identical systems MUST obey the same laws and (except for deviation due to quantum effects and deterministic chaos) evolve identically. We have no reason to believe humans break the laws of physics. All experiments so far on matter support a deterministic viewpoint. We are led by this to believe that matter should continue to obey the same laws at scale, which means "feeling" and "consciousness" are not "secret sauces" that can change the way matter behaves. Instead, the matter just does what it normally does without ever interacting with anything unphysical, and the "feeling" just exists depending on the physical structure. In this way, there is no "feedback" from a realm of experience down onto the brain. The physical structure of the brain already has everything it needs to act as if it is feeling something, regardless of any internal feeling.
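
A minimal sketch of the "mathematically identical systems evolve identically" point, using an arbitrary toy update rule (the dynamics are made up, purely illustrative):

```python
# Two copies of the same deterministic system, started in the same state,
# produce bit-for-bit identical trajectories. No "secret sauce" can make
# one diverge without changing the math.

def step(state):
    # an arbitrary deterministic (and chaotic) update rule: the logistic map
    return 3.9 * state * (1.0 - state)

human, replica = 0.5, 0.5  # identical initial conditions
for t in range(100):
    human, replica = step(human), step(replica)
    assert human == replica  # never fires: identical math, identical behaviour
```

Even though the rule is chaotic, chaos only separates systems whose initial conditions differ; identical systems stay identical forever.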

What is actually much more likely is that the two systems WILL NOT exhibit any measurable distinguishing traits. The human and replica will BOTH, for all purposes, act as if they are feeling, regardless of whether it is true or not. But how do we know whether the replica is actually feeling anything? We know the human is, but the replica? It's made out of the exact same stuff as a calculator. We have no clue what kind of existence silicon chips actually feel, if anything.

1

marvinthedog t1_ivsodgz wrote

>Instead, I'd say conscious experience reflects the physical activity, but does not change it.

That's exactly what I meant, but I wasn't clear enough. I agree with everything you say in your second paragraph.

>What is actually much more likely is that the two systems WILL NOT exhibit any measurable distinguishing traits.

I agree with this statement in your last paragraph.

What I meant was: the fact that humans are conscious beings highly affects (or, a more suitable word might be, reflects or informs) how they think and behave. Let's say that in a parallel universe evolution evolved an alternate species to humans, and that that species didn't evolve consciousness. Because they didn't evolve consciousness, the way they think and behave would differ in major ways from how we think and behave. That's what I mean when I say that the fact that humans are conscious beings highly affects (or reflects, or informs) how they think and behave.

So let's get back to the thought experiment. There is a human and a human replica made out of the same stuff as a calculator or whatever. The replica hasn't been booted up yet. Before we start the replica up, the hypothesis is that the replica won't be conscious (only for the sake of argument). We actually don't even know whether the replica is recreated in sufficient nano detail to give any output behaviour at all; the primary assumption is that it will just give the equivalent of a "blue screen of death". Then we start it up. Its output behaviour turns out to be indistinguishable from a real human's, which demonstrates that the replica is recreated in sufficient nano detail.

Now, if the hypothesis is that the replica is not conscious, then what would the probability be that the replica would give the extremely specific output behaviour of a typical physical human? Isn't that probability infinitely low?

Since we seem to agree that consciousness highly reflects/informs how we think and behave, for an unconscious replica to give that exact same output behaviour out of an infinitely large possibility space seems infinitely improbable. If instead the hypothesis is that the replica is conscious, then the output behaviour is no longer extremely unlikely, which makes that hypothesis extremely likely.

/Edit: a few words in the last sentence.

1

turnip_burrito t1_ivta5az wrote

I'm sorry, but I think we are operating on different definitions of "conscious", which as we know is a common problem since it's a very liberally used word. I think this is causing me to have trouble following. If you would please kindly define it for me, then I think I will understand your statements.

What is the definition of "conscious" in your writing? And in a similar vein, what measurements or observations (if any) could be done to show something "has" it? I think this would clarify a lot for me.

1

marvinthedog t1_ivvbtnr wrote

Ok, I had to look up the ambiguity around consciousness, because although I had heard of it I didn't know a lot about it: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

I read the first half and found a lot of the concepts a little confusing. I am pretty sure I have read this article before, even though it was a long time ago.

I guess I am referring to the actual raw conscious experience - you know, the thing that stands out from all other existing things in an infinitely profound way, the thing that could be argued to be the only thing that holds any real value or disvalue in the universe.

So if I read the article right, I guess that's the hard problem of consciousness and not the easy problem. So I don't mean self-consciousness, awareness, the state of being awake, and so on. I mean the actual raw conscious experience. To borrow from Thomas Nagel: "the feeling of what it is like to be something".

I don't think any truly objective measurements could ever be made to test whether something is conscious (has this raw conscious experience). But I do think high-confidence estimates could be made in some or many situations, for instance by looking at the internal mechanics and behaviours of systems and comparing them to other systems that we know are conscious.

I would be happy to clarify further if you have further questions.

So if we go back to my thought experiment: the way I described consciousness in words above is an output behaviour from a human (me). I think we can both agree that this specific output behaviour is a direct causation of me being conscious and not just a random correlation with me being conscious. It's not as if my writing those very specific word sequences has nothing to do with the fact that I am conscious, and the correlation just happened by random chance, right?

So, if a replica outputs a similar sequence of words, it's extremely unlikely that that very specific output behaviour just happened by random chance and has nothing to do with consciousness whatsoever. Don't you agree?

1

turnip_burrito t1_ivvpv6p wrote

Thanks for the clarification. I suspected that is what you intended by the term, but was not sure. My view probably most reflects Chalmers'. I agree with everything you've written except for these last two paragraphs:

> So if we go back to my thought experiment: the way I described consciousness in words above is an output behaviour from a human (me). I think we can both agree that this specific output behaviour is a direct causation of me being conscious and not just a random correlation with me being conscious. It's not as if my writing those very specific word sequences has nothing to do with the fact that I am conscious, and the correlation just happened by random chance, right?

I disagree with this. I agree that it is not a random correlation, but I would say your output behavior as described by an external observer does not require any information of your conscious experience. I would say that for any external observer, the physical, functional processes that occur in your brain are enough description to know what behavioral measurements I will have of you in the future (except for quantum effects), and that your consciousness is the qualia of those brain processes. There is not a random correlation or consciousness causing neural activity, but instead a direct, non-random correlation between externally measurable brain states and your consciousness. What this means specifically about who causes what is a little flexible, but I would speculate this:

  1. Physics is inherently a description of how parts of existence interact with other parts. Consciousness is some subset of existence, at the most basic level. If this is the case, conscious experience and physics are the two, and only, fundamental parts of existence. The internal physics of a thing is directly correlated one-to-one with the consciousness of the thing, but we cannot know the correlation. (Also, "thing" is a fuzzy term here.)

  2. As a consequence of (1), physics completely determines output behavior. Consciousness has no useful explanatory power for anything measurable or observable in the external world, but the reverse is also (presently) true: the internal physics of an object cannot be traced by humans to the kind of conscious experience it has, because the correlation cannot be described or known by any method we have access to.

> So, if a replica outputs a similar sequence of words, it's extremely unlikely that that very specific output behaviour just happened by random chance

Yes. But it's because of the physics only, and consciousness is irrelevant.

> and has nothing to do with consciousness whatsoever. Don't you agree?

Consciousness and behavior have a connection, but not one in which consciousness is necessary for any behavior. They are both instead (I would suppose) concurrent. (See speculation in point 1).

Summary: I would say the unconscious (or conscious) machine has a 100% probability of behaving exactly like the conscious human it is modeled after (except for chaos and quantum effects), so we are unable to tell the difference between a conscious and unconscious entity from external observation of its behavior.

1

marvinthedog t1_ivzoiuc wrote

I have carefully read through your post at least five times throughout the day. Most of your points are still quite confusing to me, so it's difficult for me to address it all, even though it's interesting.

It almost seems like you are saying that it's impossible to even make probabilistic estimates about consciousness. But what about other humans, then - how do you know they are conscious? If it stands between a replica of you on a silicon substrate and another human, which one of them would you be able to give the most confident estimate about, as to whether they are conscious or not? You know you are conscious, and we could certainly make a strong case that the one most identical to you with regards to inner physical functionality is your replica; therefore it seems you would be able to give the most confident consciousness estimate to your replica, and not the other human. Do you agree?

1

turnip_burrito t1_iw09nxs wrote

I apologize if my wording is unclear. It's also not a very commonly talked about idea, so constructing the vocabulary to discuss it was challenging for me.

>It almost seems like you are saying that it´s impossible to even make probabilistic estimates about consciousness.

Yes, presently impossible, except for making probabilistic statements about other humans. I don't know they are conscious for sure, but I think they probably are. This is because I know this: I am conscious, and I am biologically human. This is the only sample I have, so when rating probability of consciousness I would put other human brains at the top of the list (most likely conscious), animal brains next, and everything else in descending order. Something like a frozen rock, I would guess not to be conscious. The further something gets from biologically human, the less certain I am that it is conscious.

>If it stands between a replica of you on a silicon substrate and another human, which one of them would you be able to give the most confident estimate about wether they were conscious or not? You know you are conscious and we could certainly make a strong case that the one that is the most identical to you with regards to inner physical functionality is your replica so therefore it seems like you would be able to give the most confident consciousness estimate to your replica and not the other human. Do you agree?

No, I do not agree with this. I think the human is more likely to be conscious because it is made out of the same stuff as me. The robot acts like me, but it's a different substrate of system. Whether the robot is conscious or not is unknown to me. I don't currently see any reason to believe a robot that acts like me must be conscious, even if it says it is.

The other human is most similar to me in actual physics, even if they are a totally different person. Same molecules, structures, activation patterns, etc. The electric fields and quantum structures are similar. The robot brain could work in some bizarre, totally alien way in order to pretend to act like me (like a set of GPUs in a basement), and I have no clue whether the physical structure of its "brain" actually correlates with a unified conscious experience like mine.

This is also why "mind uploading" to a different substrate like a computer chip, even if the technology existed, gives me pause. The chip may very well also be conscious, but I don't think I would be able to tell from its behavior or any physical measurements. If I had to kill myself to upload, I'd risk losing my consciousness to produce a chip that might not feel anything. That'd be a waste.

1

marvinthedog t1_iw1qs77 wrote

It seems you might have misunderstood me when you said you agreed with what I proposed in my thought experiment, because what I actually proposed was that your replica provides much stronger evidence for consciousness than the other human. You know you are conscious, and the one who has the most functionally similar physical neural architecture to you is your replica.

When all three of you describe consciousness in your own words, the neural processes in your head are a lot more similar to your replica's neural processes than to the other human's. For instance, you and your replica might think mainly in pictures and be wizards at abstract math, while the other human might think mainly in words and be exceptionally good at remembering facts, or whatnot. Also, your written-down description of consciousness will be a lot closer to your replica's than to the other human's. So the fact that you seem to think the human provides stronger evidence than the replica is very perplexing to me.

And you seem to think even some animals provide stronger evidence than your replica, which is even more perplexing. Animals cannot even communicate what consciousness is (at least not in a language we can understand), and their neural architecture is far more different from your replica's.

1

turnip_burrito t1_iw1s07i wrote

Yes, I misunderstood when I said I agreed - apologies. I actually disagree, and I've just edited my post to reflect that.

1

turnip_burrito t1_iw1scim wrote

No, on a molecular level other humans and animals have more similarity to me than my silicon replica does. They are made of organic compounds, neurons, glial cells, etc. Their internal chemistry is the same as mine, so I'm more confident in their consciousness. Other humans mostly only differ from me in concentrations of compounds and specific network connections, but are otherwise the same.

The replica could run on GPUs and be made of silicon. It could also be a series of gears and pulleys. Or some absurd series of jello cups and iron marbles dropped and retrieved over and over to perform computations, which are then read out to a screen as English. That's not a similar molecular makeup to me at all. I don't know if quantum correlations or temporal correlations or whatever is necessary for consciousness are preserved in this new substrate.

Just because we look at the replica and say "it's computing using primarily visual information, like me" isn't helpful to show consciousness, because we have no evidence of silicon, pulleys, or planet-sized warehouses of jello being conscious. It's like comparing a bat and a bee and saying they both share the same diet because they both fly. A robot me and the real me don't necessarily share the same conscious experience just because our behavior is the same. We could, but how would we know? At least humans are made of basically the same stuff.

As I said, I don't believe consciousness affects behavior. I don't believe consciousness affects a robot's ability to mimic me. I am considering what it is, not what it appears to be. I think physics probably is the only thing that determines behavior, and it leaves no room for any unphysical thing to determine behavior. In other words, a mimic robot could act like me and still be unconscious because it is simply just built to do that and is following physics. It does what it is constructed to do, conscious or not, because the particles that make it up obey physics.

I also think humans do only what their physics makes them do, by the way. But we (probably) also happen to be conscious. So we experience as we move and think, but in a more passive, passenger-type way than we perceive or want to admit.

1

marvinthedog t1_iw8b0sv wrote

I have read your previous response, which you updated, and your last response, which you also updated. At this point I don't think we are going to get a lot further. This discussion really helped me clarify my own mental models about consciousness, so it was very useful. Thanks for an interesting discussion!

3