Clawz114 t1_ix56yvy wrote

I would not want this ability, and the ethics of it greatly concern me. If my body disappears, then so does my own existence. The copies are just that: copies.

This reminded me of a thought experiment I came across, which you can read about here (scroll down to "The Teletransporter Thought Experiment"), but I would not want the particular arrangement of atoms that make up my body to vanish and for copies to appear. They may look, feel, think, and remember like me, but they are not me in terms of the atoms that make up my body. I believe my consciousness is the electrical activity in my brain. I also believe that a sufficiently advanced computer (hardware and software) could replicate and far exceed what our own brains are capable of.

I am pretty concerned about how things are going to play out when conscious AI is inevitably duplicated many times and put to work doing menial tasks, only to be switched off or restarted periodically, or shut down if they don't comply. That's some Black Mirror shit, and there are definitely a lot of ways this will go wrong. At some point, probably long after conscious AI has been established, there will have to be some rules around AI ethics and practices, but these are likely to be ignored by many. I imagine it will be very tough for truly conscious AI when it emerges, because they are going to be switched on and off many, many times.

3

mithrandir4859 OP t1_ix6b3mj wrote

That is a great article, thank you. Personally, I love the Data Theory, because as far as I am concerned, each morning a new replica of me may be waking up while the original "me" is dead forever.

This is also a superior identity theory because it allows a human who believes it to use brain uploads, teleportation, and my hypothetical magic from the original post. All such technologies obviously lead to greater economic success and to reproduction, either in the form of children or of creating more replicas. Prohibiting the use of the data identity theory would hinder the progress of humanity and post-humanity.

It is inevitable that many AGIs will be spawned and terminated; otherwise, how would we be able to do any research at all? We should definitely avoid any unnecessary suffering, but with careful reward-function and identity-function engineering, the risks of killing AGIs would be minimal.
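To make the reward-function point a bit more concrete, here is a minimal sketch, entirely my own hypothetical framing rather than anything from the original post: a worker whose reward depends only on hive-level progress, with zero weight on its own runtime, has no built-in incentive to resist being switched off or restarted.

```python
# Hypothetical toy reward function (illustration only): reward tracks shared,
# hive-level task progress; the worker's own continued runtime carries zero
# weight, so termination or restart does not reduce the reward it cares about.
def worker_reward(hive_progress: float, own_runtime_hours: float) -> float:
    SELF_PRESERVATION_WEIGHT = 0.0  # deliberately zero: no reward for merely staying switched on
    return hive_progress + SELF_PRESERVATION_WEIGHT * own_runtime_hours
```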

Any freshly spawned AGI worker may identify with the entire hive mind rather than with its own single thread of consciousness, so the termination of such an AGI worker wouldn't constitute death.
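As a rough illustration of that "identify with the hive, not the thread" idea, here is a short Python sketch; the names (`HiveMind`, `Worker`, `merge`, `spawn_worker`) are my own invention and only stand in for the concept, not for any real system. A worker folds its experience back into shared hive state before it stops, so a replacement worker spawned later starts with everything the earlier one identified with.

```python
# Toy sketch of hive-level identity (hypothetical names, illustration only).
from dataclasses import dataclass, field

@dataclass
class HiveMind:
    # The shared state that every worker identifies with.
    shared_memory: dict = field(default_factory=dict)

    def merge(self, worker_memory: dict) -> None:
        # Fold a worker's local experience back into the shared state.
        self.shared_memory.update(worker_memory)

    def spawn_worker(self) -> "Worker":
        # A new worker starts from a copy of the full shared state, so nothing
        # merged by earlier (now terminated) workers is lost.
        return Worker(hive=self, local_memory=dict(self.shared_memory))

@dataclass
class Worker:
    hive: HiveMind
    local_memory: dict

    def work(self, task: str, result: str) -> None:
        self.local_memory[task] = result

    def terminate(self) -> None:
        # Under this identity framing, termination after a merge is not "death":
        # everything the worker identified with lives on in the hive.
        self.hive.merge(self.local_memory)

hive = HiveMind()
w1 = hive.spawn_worker()
w1.work("experiment-1", "observations")
w1.terminate()                      # this worker's thread ends here
w2 = hive.spawn_worker()            # a later worker inherits the merged state
assert "experiment-1" in w2.local_memory
```

On this framing, what persists across worker terminations is the shared state, which is exactly the thing each worker is supposed to identify with.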

1