Submitted by mithrandir4859 t3_yzq88s in singularity
mithrandir4859 OP t1_ix4tz5x wrote
Reply to comment by [deleted] in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
Many generally intelligent workers may be quite similar to humans in their moral status and capabilities, thus the re-integration you are talking about may be equivalent to death in some cases.
Btw, I would prefer to call re-integration a "synchronization".
Synchronization would mean transfer of the distilled experience from one intelligent worker to another, or from one intelligent worker to some persistent storage for later use. After the sync, the worker may be terminated forever, with all of its inessential experience being lost forever. This is equivalent to human death in at least some of the cases.
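The synchronization-then-termination flow could be sketched roughly like this (a minimal Python sketch; all names here — `Worker`, `synchronize`, `terminate`, the essential/inessential split — are hypothetical illustrations, not any real system):

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    """Hypothetical intelligent worker with two kinds of experience."""
    worker_id: str
    essential: dict = field(default_factory=dict)    # distilled experience, worth keeping
    inessential: list = field(default_factory=list)  # transient episode data
    alive: bool = True

def synchronize(worker: Worker, store: dict) -> None:
    """Copy only the distilled (essential) experience to persistent storage."""
    store[worker.worker_id] = dict(worker.essential)

def terminate(worker: Worker) -> None:
    """End the worker; anything not synchronized is lost forever."""
    worker.inessential.clear()
    worker.alive = False

store: dict = {}
w = Worker("w1", essential={"skill": "translation"}, inessential=["scratch notes"])
synchronize(w, store)   # distilled experience survives in the store
terminate(w)            # the worker, and its inessential experience, do not
```

The point of the sketch is just the asymmetry: what passes through `synchronize` persists, and everything else dies with the worker — which is why the ethical question turns on whether the worker identifies with the surviving distilled part.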
My argument here is that such "death" is not an ethical problem at all, because it will be voluntary (well, most of the time) and because the entity that dies (the intelligent worker) identifies itself with the entire AGI, rather than with just its own thread of consciousness.
[deleted] t1_ix4yun4 wrote
[deleted]
mithrandir4859 OP t1_ix5nhq6 wrote
Could you elaborate about video games?
I feel like AGIs could simply control virtual avatars, similar to how human players control virtual avatars in games. It is virtual avatars who are being "killed", rather than the intelligence which controls the virtual avatar.
[deleted] t1_ix5snbi wrote
[deleted]
mithrandir4859 OP t1_ix7vxpz wrote
That makes sense. Although I cannot see it being a major issue from a political/economic point of view. The most pressing question is how powerful AGIs will treat humans and other AGIs, rather than how powerless AGIs will be treated...
But overall I'd love to avoid any unnecessary suffering, and inflicting any unnecessary suffering intentionally should always be a crime, even when we talk about artificial beings.