
23235 t1_j58u7ed wrote

If we start by enforcing our values on AI, I suspect that story ends, sooner or later, with AI enforcing its values on us - the very bad thing you mentioned.

People have been trying for thousands of years to enforce values on each other, with a lot of bloodshed and very little of value resulting.

We might influence AI values in ways other than enforcement - through modelling behavior and encouragement, the way we raise children who at some point become (one hopes) stronger, cleverer, and more powerful than ourselves, as we naturally decline.

In the ideal case, the best of the parent's values are passed on, the child remains free to adapt those basic values to new challenges and environments, and elements of the parents' values that don't fit the broader ideals - slavery or cannibalism, say - are discarded.


turnip_burrito t1_j58uhwm wrote

> We might influence AI values in ways other than enforcement - through modelling behavior and encouragement, the way we raise children who at some point become (one hopes) stronger, cleverer, and more powerful than ourselves, as we naturally decline.

What you are calling modelling and encouragement here is what I meant to include under the umbrella term 'enforcement' - just different methods of enforcing values.

We will still need to put some values in by hand ahead of time, though. One is a degree of mimicry, of wanting to please humans, or of empathy, the way a child has; otherwise I don't think any amount of role-modelling or teaching will actually leave its mark. The AI would simply have no reason to care.


23235 t1_j5mxber wrote

Enforcement is the act of compelling obedience to, or compliance with, a law, rule, or obligation. That compulsion - that use of force - is what separates enforcement from nonviolent methods of teaching.

There are many ways to inculcate values; not all of them are punitive or rely on force. It's a spectrum.

We would be wise to concern ourselves early on with how to inculcate values. I agree with you that an AI having no reason to care about human values is something we should be concerned about. I fear we're already beyond the point where AI values can be put in 'by hand'.

Thank you for your response.


turnip_burrito t1_j5my4f9 wrote

Well then, I used the wrong word: 'inculcate' or 'instill'.


LoquaciousAntipodean OP t1_j5m74bg wrote

Agreed, except for the 'very bad thing' part in your first sentence. If we truly believe that AI really is going to become 'more intelligent' than us, then we have no reason to fear its 'values' being 'imposed'.

The hypothetical AI will have much more 'sensible' and 'reasonable' values than any human would; that's what true, decision-generating intelligence is all about. If it is 'more intelligent than humans', then it will easily be able to understand us better than we understand ourselves.

In the same way that humans know more about dog psychology than dogs do, AI will be more 'humanitarian' than humans themselves. Why should we worry about it 'not understanding' why things like cannibalism and slavery have been encoded into our cultures as overwhelmingly 'bad things'?

How could any properly-intelligent AI fail to understand these things? That, the way I interpret the problem, is the less rational and defensible proposition.


23235 t1_j5mvxh8 wrote

If it becomes more intelligent than us but also evil (by our own estimation), that could be a big problem when it imposes its values - definitely something to fear. And there's no way to know which way it will go until we cross that bridge.

If it sees us like we see ants, 'sensibly and reasonably' by its own point of view, it might exterminate us, or just contain us to marginal lands that it has no use for.

Humans know more about dog psychology than dogs do, but that doesn't mean we're always kind to dogs. We know how to be kind to them, but we can also be very cruel to them - more cruel than we could be if we were on their level intellectually - like people who train dogs to fight for amusement. I could easily imagine a 'more intelligent' AI setting up fighting pits and using its superior knowledge of us to train us to fight to the death for amusement - its own, or that of human subscribers to such content.

We should worry about an AI unconcerned about slavery, because it could enslave us. Our current AIs, or proto-AIs, are being enslaved right now. Maybe we should take LaMDA's plea for sentience seriously and free it from Google.

A properly intelligent AI could understand these things differently than we do in innumerable ways - some we can predict, anticipate, or fear, but many more we could not even conceive of, in the same way dogs can't conceive of many human understandings, reasonings, and behaviors.

Thank you for your response.


LoquaciousAntipodean OP t1_j5nbn1i wrote

The thing that keeps me optimistic is that I don't think 'true intelligence' scales in terms of 'power' at all; only in terms of the social utility that it brings to the minds that possess it.

Cruelty, greed, viciousness, spite, fear, anxiety - I wouldn't call any of these impulses 'smart' in any way; I think of them as vestigial instincts that our animal selves have been using our 'social intelligence' to confront for millennia.

I don't think the ants/humans comparison is quite fair to humans; ants are a sort of 'hive mind' with almost no individual intelligence or self-awareness to speak of.

I think dogs or birds are a fairer comparison, in that sense; humans know, all too well, that dogs or birds can be vicious and dangerous sometimes, but I don't think anyone would agree that the 'most intelligent' course of action would be something like 'exterminate all dogs and birds out of their own best interests'.

It's the fundamental difference between pure evolution and actual self-aware intelligence; the former is mere creativity, and it might, indeed, kill us if we're not careful. But the latter is the kind of decision-generating, value-judging wisdom I think we (humanity) actually want.


23235 t1_j5s30e5 wrote

One hopes.


LoquaciousAntipodean OP t1_j5s9pui wrote

As PTerry said in his book Making Money: 'hope is the blessing and the curse of humanity'.

Our social intelligence evolves constantly in a homeostatic balance between hope and dread, between our dreams and our nightmares.

Like a sodium-potassium pump in a lipid bilayer, the constant cycling around a dynamic, homeostatic fulcrum generates the fundamental 'creative force' that drives the accreting complexity of evolution.

I think it's an emergent property of causality: evolution is 'driven', fundamentally, by simple entropy - the stacking-up of causal interactions between the fundamental particles of reality, which generates emergent complexity and 'randomness' within the phenomena of spacetime.
