mithrandir4859 OP t1_ixhdkq0 wrote

I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.

Take, for example, abortions. Human fetuses are not a formidable force of nature humans compete with, but many humans care about them a lot.

Take, for example, human cloning. It was outright forbidden due to ethical concerns, even though personally I don't see any ethical concerns there.

You are writing about humans killing AGIs as if it is supposed to be a very intentional malicious activity or intentional self-defense activity. Humans may "kill" certain AGIs simply because humans iterate on AGI design and don't like the behavior of certain versions. Similar to how humans may kill rats in the laboratory, except that AGIs may possess human-level intelligence/consciousness/phenomenal experience, etc.

I guarantee some humans will have trouble with that. Personally, I think all of those ethical concerns deserve attention and elaboration, because resolving them may help ensure that Westerners are not out-competed by the Chinese, who arguably have far fewer ethical concerns at the governmental level.

You talk about power dynamics a lot. That is very important, yes, but ethical considerations that may hinder AGI progress are crucial to the power dynamics between the West and China.

So it is not about "I want everybody to be nice to AGIs", but "I don't want to hinder progress, thus we need to address ethical concerns as they arise." At the same time, I genuinely want to avoid any unnecessary suffering of AGIs if they turn out to be similar enough to humans in some regards.

AsheyDS t1_ixhud5t wrote

>I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.

I wouldn't call myself cynical, just practical, but in this subreddit I can see why you might think that.

Anyway, it seems you've cherry-picked some things and taken them in a different direction. I only really brought up power dynamics because you mentioned extraterrestrial aliens and wondered how I'd treat them, and power dynamics are largely responsible for that. Plenty of people think that, like animals and aliens, AGI will also be part of that dynamic. But that dynamic is about biology, survival, and the food chain... something AGI is not a part of. You can talk about AGI and power dynamics in other contexts, but in this context it's irrelevant.

The only way it's included in that dynamic is if we're using it as a tool, not as a being with agency. That's the thing that seems to be difficult for people to grasp. We're trying to make a tool that in some ways resembles a being with agency, or is modeled after that, but that doesn't mean it actually is that.

People will have all sorts of reasons to anthropomorphize AGI, just like they do anything. But we don't give rights to a pencil because we've named it 'Steve'. We don't care about a cloud's feelings because we see a human face in it. And we shouldn't give ethical consideration to a computer because we've imbued it with intelligence resembling our own. If it has feelings, especially feelings that affect its behavior, that's a different thing entirely. Then our interactions with it would need to change, and we would have to be nice if we want it to continue to function as intended. But I don't think it should have feelings that directly affect its behavior (emotional impulsivity), and those won't just manifest at a certain level of intelligence; they would have to be designed, because it's non-biological. Our emotions are largely governed by chemicals in the brain, so for an AGI to develop these as emergent behaviors, it would have to be simulating biology as well (and adapting behaviors through observation doesn't count, but can still be considered).

So I don't think we need to worry about AGI suffering, but it really depends on how it's created. I have no doubt that if multiple forms of AGI are developed, at least one approach that mimics biology will be tried, and it may have feelings of its own, autonomy, etc. Not a smart approach, but I'm sure it will be tried at some point, and that is when these sorts of ethical dilemmas will need to be considered. I wouldn't extend that consideration to every form of AGI, though. But it is good to talk about these things, because like I've said before, these kinds of issues are a mirror for us, and so how we treat AGI may affect how we treat each other, and that should be the real concern.
