
glass_superman t1_ixdwyrw wrote

Are people so different? We spend years teaching our kids to know right from wrong. Maybe if we spent as much time teaching computers, they could know it too?

−7

d4em t1_ixdy7r1 wrote

Does a baby need to be taught to feel hungry?

While I appreciate the comparison you're making, it poses a massive problem: who initially taught humans the difference between right and wrong?

Kids do good without being told to. They can know something is wrong without being taught it is. For a computer, this simply is not possible. We're not teaching kids what "good" and "bad" are, as concepts. We're teaching them to behave in accordance with the morals of society at large. And sure, you could probably teach a computer to simulate this behavior and make it look like it's doing the same thing, but at the very core, there would be something fundamental missing.

What's good and bad isn't a purely intellectual question. It's deeply tied to what we feel, and that's what a computer simply cannot do. Even if we teach it to emulate empathy, it will never truly have the capacity to place itself in someone else's shoes. It won't even be able to place itself in its own shoes. Insofar as it tries to stay alive, it's only because it's following the instruction to do so. A computer is not situated in the world in the way living beings are.

8

Skarr87 t1_ixe6ouh wrote

In my experience children tend to be little psychopaths. Right and wrong (morality) likely evolved along with humans as they developed societies. Societies give a significant boost to the survival and propagation of their members. So societies with moral systems conducive to larger and more efficient societies tend to propagate better as well. These moral systems then get passed on as the society propagates, and any society whose morals are not conducive to its survival tends to die off.

Why do you believe an AI would definitely be incapable of empathy? Not all humans are even capable of empathy, and empathy can be lost through damage to the frontal lobe. For some who lose it, it never returns; others are able to relearn to express it. If it was relearned, does that mean they are just emulating it and not actually experiencing it? How would that be different from an AI?

When humans have an intuition, a feeling, or a hunch, it isn't out of nowhere; they typically have some kind of history or experience with the subject. For example, when a detective has a hunch that a suspect is lying, it could come from previous experience, or even from a bias drawn from correlations with the behavior of past lying suspects that other detectives haven't really noticed. How is this fundamentally different from an AI finding an odd correlation in data using statistics? You could argue that an AI correlating data like this is forming a hunch, and that a human having a hunch is just drawing a conclusion from correlated data.
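To make the point concrete, a "hunch" of the kind described above can be caricatured in a few lines: an estimate of how often a behavioral cue co-occurred with lying in past cases. This is a minimal sketch with entirely invented toy data, not a claim about how any real system works:

```python
# Hypothetical sketch: a detective's "hunch" modeled as a learned
# correlation over past cases. All data here is invented for illustration.

# Each past case: (suspect_avoided_eye_contact, suspect_was_lying)
past_cases = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
    (True, True), (False, False),
]

def hunch_strength(cases, cue=True):
    """Estimate P(lying | cue observed) from past cases."""
    with_cue = [lying for seen, lying in cases if seen == cue]
    return sum(with_cue) / len(with_cue)

# The "hunch": given the cue, how often were past suspects lying?
print(round(hunch_strength(past_cases), 2))  # 0.75 on this toy data
```

Whether you call this a hunch or a statistic is exactly the question the comment raises: the computation is the same either way.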

Note I am not advocating using AI in policing, I believe that is a terrible idea that can and will be very easily abused.

3

d4em t1_ixe8sn6 wrote

Our moral systems probably got more refined as society grew, but by our very nature as living beings we need an understanding of right and wrong to inform our actions. A computer doesn't have this understanding; it just follows the instructions it's given, always.

I'm not making the argument that machines are incapable of empathy, although that follows by extension; the core of the argument is that machines are incapable of experience. Sure, you could train a computer to spit out a socially acceptable moral answer, but there would be nothing making that answer inherently moral to the computer.

I agree that little children are often psychopaths, but they're not incapable of experience. They have likes and dislikes. A computer does not care about anything; it just does as it's told.

The fundamental difference between a human hunch and the odd correlation the AI makes is that the correlation does not mean anything to the computer; it's just moving data like it was built to do. It's a machine.

2

Skarr87 t1_ixekpu2 wrote

So if I am understanding your argument, and correct me if I am wrong, the critical difference between a human and a computer is that a computer isn't capable of sentience, and by extension sapience, or even more generally consciousness?

If that is the argument, then my take is that I'm not sure we can say that yet. We don't yet have a good enough understanding of consciousness to say it is impossible for non-organic things to possess it. All we know for sure is that consciousness seems to be suppressed or damaged when biological processes within the brain change or stop. I am not aware of a reason a machine, in principle, could not simulate those processes to the same effect (consciousness).

Anyway, it seems to me that your main problem with using AI for policing is that it would be mechanically precise in its application without understanding the intricacies of why crime may be happening. For example, maybe it will conclude that African American communities are crime centers without understanding that they tend to be poverty stricken, which is the real cause. So its output may end up being almost a self-fulfilling prophecy?

2

d4em t1_ixetoqs wrote

I'm not talking about sentience, sapience, consciousness, or anything like that; I'm talking about experience. All computers are self-aware: their code includes references to self. I would say machine learning constitutes a basic level of intelligence. What they cannot do is experience.

It's actually very interesting that you say we don't have a good enough understanding of consciousness yet. The thing about consciousness is that it's not a concrete term. It's not a defined logical principle. In considering what consciousness is, we cannot just do empirical research (it's very likely consciousness cannot be empirically proven); we have to make our own definition, we have to make a choice. A computer would be entirely incapable of doing so. The best it could do is measure how the term is used and derive something based on that. Those calculations could get extremely complicated and produce results we wouldn't have come up with. But it wouldn't be able to form a genuine understanding of what "consciousness" entails.

This goes for art too: computers might be able to spit out images, measure which ones humans think are beautiful, and use that data to create a "beautiful" image, but there would be nothing in that computer experiencing the image. It's just following instructions.

There's a thought experiment called the Chinese Room. In it, a man who does not speak a word of Chinese is placed in a room. When you want your English letter translated into Chinese, you slide it through a slit in the wall. The man then goes to work and looks up everything relevant to your letter in a stack of dictionaries and grammar guides. He's extremely fast and accurate. Within a minute, a perfect translation of your letter comes back out through the slit in the wall. The question is: does the man in the room know Chinese?

For a more accurate comparison: the man does not know English either; he looks that up in a dictionary as well. It's also not a man but a piece of machinery, which finds the instructions for how to look at your letter, and how to hand it back to you, in yet another dictionary. Every time you hand it a letter, it has to look up in the dictionary what a "letter" is and what to do with one.
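The pure rule-following described here can be sketched in a few lines of code: every step is a lookup, and nothing in the program understands either language. This is only a toy caricature of the thought experiment, with a tiny made-up rule book standing in for the room's dictionaries:

```python
# A toy "Chinese Room": every step is a lookup in a rule book.
# The program manipulates the entries purely as symbols; nothing
# here understands English or Chinese.

rule_book = {
    "hello": "你好",
    "thank": "谢谢",
    "you": "你",
}

def room(letter):
    """Translate word by word using only the rule book."""
    return " ".join(rule_book.get(word, "?") for word in letter.split())

print(room("hello"))      # 你好
print(room("thank you"))  # 谢谢 你
```

The output can look competent while the mechanism is nothing but table lookups, which is exactly the intuition the thought experiment is meant to pump.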

As for the problems with using AI or other computer-based solutions in government: yeah, pretty much. The real risk is that most police personnel aren't technically or mathematically inclined, and humans have shown a tendency to blindly trust what the computer or the model tells them. But also, if there were a flaw in one of the dictionaries, it would be flawlessly copied into every letter. And we're using AI to solve difficult problems that we might not be able to double-check.

2

Skarr87 t1_ixhrn5o wrote

I guess I'm confused by what you mean by experience. Do you mean something like sensations? The ability to experience the sensation of the color red, or emotional sensations like love, as opposed to just detecting light and recognizing it as red, or emulating the appropriate responses that would correspond to the expression of love?

With your example of the man translating letters, I'm not 100% sure that isn't an accurate analogy for how humans process information. I know it's supposed to contrast human knowledge with machine knowledge, but it seems pretty damn close to how humans process stuff. There are cases where people have had brain injuries where they essentially lose access to the parts of their brain that process language. They will straight up lose the ability to understand, speak, read, and write a language they were previously fluent in; the information just isn't there anymore. It would be akin to the man losing access to his database. So then the question becomes: does a human even "know" a language, or do they just have what is essentially a relational database to reference?

Regardless, none of this matters for whether we should use AI for crime. Both of our arguments essentially make the same case, albeit from different directions: AI can easily give false interpretations of data and should not be solely relied on to determine policing policy.

1

glass_superman t1_ixe2glj wrote

A baby doesn't need to learn to be hungry but neither does a computer need to learn to do math. A baby does need to learn ethics, though, and so does a computer.

Whether or not a computer has something fundamentally missing that will make it forever unable to have a notion of "feeling" as humans do is unclear to me. You might be right. But maybe we just haven't gotten good enough at making computers. Just as we have, in the past, made declarations about the inabilities of computers that were later proved false, maybe this is another one?

It's important, for ethical reasons, that we are able to recognize when a computer becomes able to suffer. If we assume that a computer cannot suffer, do we risk overlooking actual suffering?

−2

d4em t1_ixe5eyy wrote

The thing is, for a baby to be hungry, it needs to have some sort of concept of hunger being bad. We need the difference between good and bad to stay alive. A computer doesn't, because it doesn't need to stay alive; it just runs and shuts down according to the instructions it's given.

We need to learn ethics, yes, but we don't need to learn morals. And ethics really is the study of moral frameworks.

It's not because the computer is not advanced enough. It's because the computer is a machine, a tool. It's not alive. Its very nature is fundamentally different from that of a living being. It's designed to fulfil a purpose, and that's all it will ever do, without a choice in the matter. It simply is not "in touch" with the world in the way a living being is.

It's natural to empathize with computers because they simulate mental function. I've known people to empathize with a rock they named and drew a face on; it doesn't take much for us to become emotionally attached. If we can do it with a rock, we stand virtually no chance against a computer that "talks" to us and can simulate understanding or even respond to emotional cues. I would argue that it's far more important we don't lose sight of what computers really are.

And if someone were to design a computer capable of suffering, or in other words a machine that can experience (I don't think it's possible, and it would need to be so entirely different from the computers we know that we wouldn't call it a "computer"), that person is evil.

1

glass_superman t1_ixen19z wrote

>And if someone were to design a computer capable of suffering, or in other words, a machine that can experience - I don't think its possible and it would need to be so entirely different from the computers we know that we wouldn't call it a "computer" - that person is evil.

I made children that are capable of suffering? Am I evil? (I might be, I dunno!)

If we start with the assumption that no computer can be conscious then we will never notice the computer suffer, even if/when it does.

Better to develop a test for consciousness and apply it to computers regularly, to have a falsifiable result. So that we don't accidentally end up causing suffering!

0

d4em t1_ixeu6yh wrote

I'm not saying it's evil to create beings that are capable of suffering. I'm saying that giving a machine that has no choice but to follow the instructions given to it the capability to suffer would be evil.

And again, this machine would have to be specifically designed to be able to suffer. There is no emergent suffering that results from mathematical equations. Don't develop warm feelings for your laptop, I guarantee you they are not returned.

1

glass_superman t1_ixfso7p wrote

Consciousness emerged from life as life advanced. Why not from computers?

You could argue that we wouldn't aim to create a conscious computer. But neither did nature aim to create consciousness and here we are.

So I absolutely do think that there's a chance that it simply emerges. Just like it did before. Every day some unconscious gametes get together and, at some point, consciousness emerges, right? If carbon, why not silicon?

1

d4em t1_ixguiui wrote

Well, first, the comparison you're drawing between something created by nature and a machine we designed as a tool doesn't hold. We were not designed. It's not that "nature" did not aim to create consciousness; it's that nature does not have any aim at all.

Second, our very being is fundamentally different from what a computer is. Experience is a core part of being alive; intellectual function is built on top of it. You're proposing the same could work backwards: that you could build experience on top of cold mechanical calculations. I say it can't.

Part of the reason is the hardware computers run on: it is entirely digital. It can't do "maybes."

Another part of the reason is that computers do not "get together" and have their unconsciousness meet. They are calculators, mechanically providing the answer to a sum. They don't wander, they don't try, they do not do anything that was not a part of the explicit instruction embedded in their design.

1

glass_superman t1_ixhifzy wrote

Is this not just carbon chauvinism?

Quantum computers can do maybe.

I am unconvinced that the points that you bring up are salient. Like, why do the things that you mention preclude consciousness? You might be right but I don't see why.

1