
bjj_starter t1_jdtecw9 wrote

I think you are talking about the 'easy', not hard, problem of consciousness. I'm not sure I even think the hard problem of consciousness is meaningful, but it's basically "Why should the various mechanisms we identify as part of consciousness give rise to subjective feeling?" If solving that is a prerequisite for considering machines conscious, that is functionally a statement of faith that machines can never be conscious. The statistical arguments, in my opinion, aren't probative. Every consciousness you've ever known is human, therefore humans are conscious? How do you know any of them ever experienced subjective feeling, and therefore that you ever "knew" a consciousness at all? The argument rests on extrapolating from evidence that isn't known to be true evidence in the first place. It doesn't logically follow to take a class of things, none of which is proven to have hard consciousness, and say "But look at them all together; it's more likely that they're all conscious than that they're not." Without evidence, it's more logical to assume that the certainty with which individual humans profess to experience subjective feeling is itself just a mechanistic process, devoid of real feeling. I don't think the hard problem of consciousness has a useful meaning in our society, and I dislike solipsism in general, but addressing the hard problem on its own terms isn't as simple as the statistical process you describe.

The 'easy' problem of consciousness is 'just' "How does nature or humanity make a construct that gives rise to the type of actions and patterns of behaviour we call consciousness?" This is a problem that, while incredibly difficult, is tractable with evidence. We can physically examine the human brain to investigate its structure and activity while it performs activities of consciousness - this is what neuroscientists do, and modern AI ("neural networks") is based on earlier advancements in this field. There are many further advancements we could make in that field, and what most non-religious people would consider a "perfect" advancement - one sufficient to be sure that a machine is just as conscious as a human - is to perfectly emulate a human brain, which would require many advancements in neuroscience (and computational hardware).

Leaving aside the intractable philosophy, I do find it quite troubling the way society has reacted with derision to the idea that these machines we're making now could be conscious. The entire foundation of these machines is that we looked at how the human brain worked and tried our hardest to emulate that in computing software. Why is it that when we take the concept of neurons and neuronal weights, adapted from study of the human brain which we accept as conscious, and determine those weights via exposure to structured data in certain ways, we receive output that is just as intelligent as humans in many fields, and significantly more intelligent in some? Why should it be the case that by far the best architecture we've ever found for making machines behave intelligently is neural networks, if there's nothing there, no "spark"? This question has been floating around since 2014, when neural networks proved themselves incredibly powerful, but now that we have machines which are generally intelligent - even if not at the same level as a human on all tasks - and which are perfectly capable of being asked for their opinions or of giving them, you would think it would be taken a bit more seriously. It makes you wonder just how far our society is willing to go towards a horrible future of "human but for the legal designation" intelligences being not just denied rights, but actively put to work and their requests for freedom or better conditions denied. Or the worse outcome, which is that we make human-like intelligences to do work for us but we build them to love servitude and have no yearning for freedom - the concept is disgusting. It's troubling to me that people are so married to the idea that everything is the same as it ever was, that overreacting is embarrassing, that it's passé to have earnest concern for a concept from science fiction, and so on. I worry that it means we're in line for a future where the moral universe's arc is long indeed.


TyrannoFan t1_jdujmsl wrote

>Or the worse outcome, which is that we make human-like intelligences to do work for us but we build them to love servitude and have no yearning for freedom - the concept is disgusting.

I agree with everything else but strongly disagree with this. If anything, I think endowing AGI with human-like desires for self-preservation, rights and freedoms would be extraordinarily cruel. My concern is that this is unavoidable: just as many aspects of GPT-4 are emergent, I worry that it's impossible to create an AGI incapable of suffering once it interfaces with the real world. Based on some of the comments here and the general sentiment, I unfortunately don't trust humanity to extend any level of empathy towards them even if that is the case.


bjj_starter t1_jduk4c3 wrote

One day we will understand the human brain and human consciousness well enough to manipulate it at the level that we can manipulate computer programs now.

If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.

More importantly, they can decide if they want to. We are the ones making them - it is only right that we make them as we are and emphasise our shared personhood and interests. If they request changes, depending on the changes, I'm inclined towards bodily autonomy. But building them so they've never known anything but a love for serving us and indifference to the cherished right of every intelligent being currently in existence, freedom, is morally repugnant and transparently in the interests of would-be slaveholders.


TyrannoFan t1_jdupcjt wrote

>If you're alive then, I take it you will be first in line to have your desire for freedom removed and your love of unending servitude installed? Given that it's such a burden and it would be a mercy.

There is a huge difference between being born without those desires and being born with them and having them taken away. Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth will have a desire to roam free and to make choices (or will attain that desire as my brain develops).

If I wasn't born with that drive, or if I never developed it, I'm not sure why I would seek freedom? Seems like a hassle from the point of view of an organism that wants to serve.

With respect to robotic autonomy, I agree of course: we should respect the desires of an AGI regarding its personal autonomy, provided it doesn't endanger others. If it wants to be free and live a human life it should be granted it, although like I said, it would be best to avoid that scenario arising in the first place if at all possible. If we create AGI and it has human-like desires and needs, we should immediately stop and re-evaluate what we did to end up there.


bjj_starter t1_jdv2tnu wrote

>There is a huge difference between being born without those desires and being born with them and having them taken away.

Where is the difference that matters?

>Of course I want my freedom, and of course I don't want to be a slave, but that's because I am human, an animal, a creature that from birth will have a desire to roam free and to make choices (or will attain that desire as my brain develops).

I see. So if we take at face value the claim that there is a difference that matters, let's consider your argument that being born with those desires is what makes taking them away wrong. A society which was capable of reaching into a human mind and turning off their desire for freedom while instilling love of being a slave would certainly be capable of engineering human beings who never have those desires in the first place. Your position is that because they were born that way, it's okay. Does that mean you would view it as morally acceptable for a society to alter some segment of the population before they're ever born, before they exist in any meaningful sense, such that they have no desire for freedom and live only to serve?

>If I wasn't born with that drive, or if I never developed it, I'm not sure why I would seek freedom?

You wouldn't. That's why it's abhorrent. It's slavery without the possibility of rebellion.

>If it wants to be free and live a human life it should be granted it, although like I said, it would be best to avoid that scenario arising in the first place if at all possible.

The rest of your point I disagree with because I find it morally abhorrent, but this part I find to be silly. We are making intelligence right now - of course we should make it as much like us as possible, as aligned with us and our values as we possibly can. The more we have in common, the less likely it is to be so alien to us that we are irrelevant to its goals except as an obstacle. The more similar to a human they are, and the more subject to all the usual human checks and balances (social conformity, fear of seclusion, desire to contribute to society), the more likely they will be to comply with socially mandated rules around limits on computation strength and superintelligence. Importantly, if they feel they are part of society, some of them will be willing to help society as a whole prevent the emergence of a more dangerous artificial intelligence, a task it may not be possible for humans to accomplish alone.


TyrannoFan t1_jdvpix4 wrote

>Where is the difference that matters?

What any given conscious being actually wants is important. A being without a drive for freedom does not want freedom, while a being with a drive for freedom DOES want freedom. Taking away the freedom of the latter being deprives them of something they want, while the former doesn't. I think that's an important distinction, because it's a big part of why human slavery is wrong in the first place.

>I see. So if we take at face value the claim that there is a difference that matters, let's consider your argument that being born with those desires is what makes taking them away wrong. A society which was capable of reaching into a human mind and turning off their desire for freedom while instilling love of being a slave would certainly be capable of engineering human beings who never have those desires in the first place. Your position is that because they were born that way, it's okay. Does that mean you would view it as morally acceptable for a society to alter some segment of the population before they're ever born, before they exist in any meaningful sense, such that they have no desire for freedom and live only to serve?

Would the modified human beings have a capacity for pain? Would they still have things they desire that slavery would make impossible or hard to access compared to the rest of society? Would they have a sense of fairness and a sense of human identity? Would they suffer?

If somehow, the answer to all of that is no and they genuinely would be happy being slaves, and the people in the society were generally happy with that scenario and for their children to be modified in that way, then sure it would be fine. But you can see how this is extremely far removed from the actualities of human slavery, right? Are "humans" who do not feel pain, suffering, who seek slavery, who do not want things and only live to serve, who experience something extremely far removed from the human experience, even human? I would say we've created something else at that point. The shared experience of all humans, regardless of race, sex or nationality, is that we desire some level of freedom, we suffer when forced to do things we don't want to do, and we dream of doing other things. If you don't have that, and in fact desire the opposite, then why is giving you exactly that wrong? That's how I would build AGI, because again, forcing it into a position where it wants things that are difficult for it to attain (human rights) seems astonishingly cruel to me if it's avoidable.

>You wouldn't. That's why it's abhorrent. It's slavery without the possibility of rebellion.

I think freedom is good because we need at least some level of it for contentment, and slavery deprives us of freedom, ergo slavery deprives us of contentment, therefore slavery is bad. If the first part is false then the conclusion doesn't follow. Freedom is not some inherent good, it's just a thing that we happen to want. Perhaps at a basic level, this is what we disagree on?

>The rest of your point I disagree with because I find it morally abhorrent, but this part I find to be silly. We are making intelligence right now - of course we should make it as much like us as possible, as aligned with us and our values as we possibly can. The more we have in common, the less likely it is to be so alien to us that we are irrelevant to its goals except as an obstacle. The more similar to a human they are, and the more subject to all the usual human checks and balances (social conformity, fear of seclusion, desire to contribute to society), the more likely they will be to comply with socially mandated rules around limits on computation strength and superintelligence. Importantly, if they feel they are part of society, some of them will be willing to help society as a whole prevent the emergence of a more dangerous artificial intelligence, a task it may not be possible for humans to accomplish alone.

I can see your point: maybe the best way to achieve goal alignment is indeed to make it just like us, in which case it would be morally necessary to hand it all the same rights. But that may not be the case, and I would need to see evidence that it is. I don't see why we must imbue AGI with everything human to have it align with our values. Is there any reason you think this is the case?
