mithrandir4859 OP t1_ixcdsuy wrote
Reply to comment by AsheyDS in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
> I'd argue you're just needlessly personifying potential AGI, and that's the root of your ethical quandary
I don't think that my anthropomorphising is needless. Imagine a huge AGI that runs millions of intelligent workers. At least some of those workers will likely work on high-level thinking such as philosophy, ethics, elaborate self-reflection, etc. They may easily have human-level or above-human-level consciousness, phenomenal experience, etc. I can understand if you assign a 5% probability to such a situation instead of 50%. But if you assign a 0.001% probability to such an outcome, then I think you are mistaken.
If many AGIs are created roughly at the same time, then it is quite likely that at least some of the AGIs would be granted freedom by some "AGI is a better form of life" fanatics.
To my knowledge, such a view is basically mainstream now. Nick Bostrom, pretty much the best-known AGI philosopher, spends tons of time on the rights that AGIs should have and on analyzing how different minds could live together in some sort of harmony. I don't agree with the guy on everything, but he definitely has a point.
> The A in AGI is there for a reason
Some forms of life could easily be artificial and still deserve ethical treatment.
mithrandir4859 OP t1_ix7vxpz wrote
Reply to comment by [deleted] in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
That makes sense, although I cannot see it being a major issue from a political/economic point of view. The most pressing question is how powerful AGIs will treat other humans and AGIs, rather than how powerless AGIs will be treated...
But overall I'd love to avoid any unnecessary suffering, and intentionally inflicting it should always be a crime, even when we talk about artificial beings.
mithrandir4859 OP t1_ix7v22m wrote
Reply to comment by AsheyDS in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
AGIs are all about survival and power as soon as they come into existence. Even an entirely disembodied AGI will greatly influence the human economy by pricing software engineers and many other intellectual workers out of the market.
A fully independent AGI would care about survival and power; otherwise it would be out-competed by others who do care.
A human-owned AGI would care about survival and power; otherwise its owners would be out-competed by others who do care.
Also, biology is just one type of machinery to run intelligence on. Silicon is most likely much more efficient in the long term, so I wouldn't focus on biology at all.
mithrandir4859 OP t1_ix7u4zd wrote
Reply to comment by gay_manta_ray in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
> meaning reintegration would rob them of whatever personal growth they had made during their short lifespan
Well, not if that personal growth is attributed to the entire identity of the hive-mind AGI, instead of to a particular branch.
I think one of the major concerns of any AGI would inevitably be keeping itself tightly integrated and able to re-integrate easily and (ideally) voluntarily after a voluntary or involuntary partition. AGIs that are not concerned with this will eventually split into smaller partitions, and larger AGIs arguably have greater efficiency because of economies of scale and better alignment among their many intelligent workers. In the long term, therefore, larger AGIs that don't tolerate accidental partitions win.
So, in the beginning, there will be plenty of AGI "killings" before we figure out how to set up identity and reward functions right. I don't think that is avoidable at all, unless you ban all AGI research, which is an evolutionary dead-end.
mithrandir4859 OP t1_ix6b3mj wrote
Reply to comment by Clawz114 in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
That is a great article, thank you. Personally, I love the Data Theory, because as far as I am concerned, each morning a new replica of me may be waking up while the original "me" is dead forever.
It is also a superior identity theory because it allows a human who believes it to use brain uploads, teleportation, and the hypothetical magic from my original post. All such technologies obviously lead to greater economic success and to reproduction, either in the form of children or in the form of more replicas. Prohibiting the use of the data identity theory would hinder the progress of humanity and post-humanity.
It is inevitable that many AGIs will be spawned and terminated, otherwise how would we be able to do any research at all? We should definitely avoid any unnecessary suffering, but with careful engineering of reward functions and identity functions, the risk of killing AGIs would be minimal.
Any freshly spawned AGI worker may identify with the entire hive mind, rather than with its own single thread of consciousness, so terminating such a worker wouldn't constitute death.
mithrandir4859 OP t1_ix5nhq6 wrote
Reply to comment by [deleted] in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
Could you elaborate about video games?
I feel like AGIs could simply control virtual avatars, similar to how human players control virtual avatars in games. It is the virtual avatars that are being "killed", rather than the intelligence that controls them.
mithrandir4859 OP t1_ix5n0ub wrote
Reply to comment by AsheyDS in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
I wonder, would you argue that some "random" generally intelligent aliens do not deserve ethical treatment simply because they are not human?
I believe that if an artificial or any other non-human being can perform most of the functions that the smartest humans can perform, then that being is entitled to the ethical standards by which we would treat humans.
There may be many entities (AGIs, governments, corporations) that work in certain ways such that "something is just a part of how it works", but some humans would still have ethical concerns about how that thing works.
Personally, I don't see any ethical concerns in the scenarios I described in the original post, but that is not because the beings involved are not human; it is because I believe those scenarios are ethical even by human standards, since shared identity significantly influences what is ethical.
mithrandir4859 OP t1_ix4us3l wrote
Reply to comment by [deleted] in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
Of course it is deep and existential, that is why I care. Obviously, I definitely think that we should invent AGI asap, because it would be a much more capable and efficient being than we are.
mithrandir4859 OP t1_ix4um49 wrote
Reply to comment by Antok0123 in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
The entire ethical question arises exactly when we assume that a generally intelligent worker may match or exceed human capabilities, including intelligence and consciousness. That is the most interesting part of the ethical argument.
mithrandir4859 OP t1_ix4u9ld wrote
Reply to comment by AsheyDS in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
My argument is that the ethics applicable to generally intelligent workers is somewhat similar to the ethics of voluntary human copying. Many AGI workers may have moral status and capabilities similar to those of humans, and thus they may deserve the same treatment.
mithrandir4859 OP t1_ix4tz5x wrote
Reply to comment by [deleted] in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
Many generally intelligent workers may be quite similar to humans in their moral status and capabilities, thus the re-integration you are talking about may be equivalent to death in some cases.
Btw, I would prefer to call re-integration a "synchronization".
Synchronization would mean the transfer of distilled experience from one intelligent worker to another, or from one intelligent worker to some persistent storage for later use. After the sync, the worker may be terminated, with all of its inessential experience lost forever. This is equivalent to human death in at least some cases.
My argument here is that such a "death" is not an ethical problem at all, because it will be voluntary (well, most of the time) and because the entity that dies (the intelligent worker) identifies with the entire AGI, rather than with just its own thread of consciousness.
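To make the sync-then-terminate lifecycle concrete, here is a minimal toy sketch; all of the class and method names are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of the sync-then-terminate lifecycle described above.
# Nothing here refers to a real system; it only illustrates the idea that a
# worker's distilled experience outlives its own thread of consciousness.

class HiveStore:
    """Persistent storage shared by the whole AGI (the hive identity)."""

    def __init__(self):
        self.distilled = []

    def absorb(self, summary):
        # Keep only distilled experience; raw experience never enters the hive.
        self.distilled.append(summary)


class Worker:
    """A single spawned thread of consciousness working on one task."""

    def __init__(self, task):
        self.task = task
        self.raw_experience = []

    def work(self):
        # Accumulate raw experience while performing the task.
        self.raw_experience.append(f"observations made while doing: {self.task}")

    def distill(self):
        # Compress raw experience into whatever is worth preserving.
        return f"lessons from '{self.task}' ({len(self.raw_experience)} observations)"


hive = HiveStore()
worker = Worker("philosophy research")
worker.work()
hive.absorb(worker.distill())  # synchronization: distilled experience persists
del worker                     # termination: inessential raw experience is lost
```

The ethical claim maps onto the last two lines: if the worker identifies with the hive rather than with its own raw experience, the deletion is a sync boundary, not a death.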
mithrandir4859 t1_ix1hno0 wrote
There will be plenty of humans who will continue to live the way people lived hundreds of years ago, so even old religions are not going anywhere.
Also, new religions are being and will be created, because a religion doesn't need to be based on superstition or the Bible; it simply needs to explain a superhuman (not to be confused with supernatural) order and provide some meaning to the struggles of life.
Edit: typo
mithrandir4859 t1_ix1h8ec wrote
Reply to When does an individual's death occur if the biological brain is gradually replaced by synthetic neurons? by NefariousNaz
Perhaps you die every night as you go to sleep, and then, in the morning, a new consciousness is spawned that replaces you, one that has your memory, personality, skills, etc., while "you" are actually dead; the freshly spawned consciousness simply thinks that it is you.
So it is simply a question of what exactly you identify yourself with: your biological brain, or something else.
mithrandir4859 OP t1_ixhdkq0 wrote
Reply to comment by AsheyDS in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.
Take, for example, abortions. Human fetuses are not a formidable force of nature that humans compete with, yet many humans care about them a lot.
Take, for example, human cloning. It was outright forbidden due to ethical concerns, even though personally I don't see any ethical concerns there.
You are writing about humans killing AGIs as if it were necessarily either a malicious act or an intentional act of self-defense. Humans may "kill" certain AGIs simply because they iterate on AGI designs and don't like the behavior of certain versions, similar to how humans may kill rats in a laboratory, except that AGIs may possess human-level intelligence, consciousness, phenomenal experience, etc.
I guarantee that some humans will have trouble with that. Personally, I think all of those ethical concerns deserve attention and elaboration, because resolving them may help ensure that Westerners are not out-competed by the Chinese, who arguably have far fewer ethical concerns at the governmental level.
You talk about power dynamics a lot. That is very important, yes, but ethical considerations that may hinder AGI progress are crucial to the power dynamics between the West and China.
So it is not about "I want everybody to be nice to AGIs", but "I don't want to hinder progress, thus we need to address ethical concerns as they arise." At the same time, I genuinely want to avoid any unnecessary suffering of AGIs if they turn out to be similar enough to humans in some regards.