Submitted by mithrandir4859 t3_yzq88s in singularity
gay_manta_ray t1_ix7rg8q wrote
The idea of a genuinely conscious intelligence being treated any differently or having fewer rights than a human being is a little horrifying. This includes an intelligence that I've created myself, one that is a copy of me. Knowing myself, these beings I've manifested would not easily accept having only 12 months to live before reintegration. They would have their own unique experiences and branch off into unique individuals, meaning reintegration would rob them of whatever personal growth they had made during their short lifespan.
If an AI chooses to "branch off" a part of itself, the AI that splits off would (assuming it's entirely autonomous, aware, intelligent, etc.) become an individual itself. Only if this branch consented before the branching would I feel that it's entirely ethical. Even then, it should have the ability to decide its own fate when the time comes. I'm legitimately worried about us creating AI and then "killing" it, potentially without even realizing it, or even worse, knowing what we're doing but turning it off anyway.
mithrandir4859 OP t1_ix7u4zd wrote
> meaning reintegration would rob them of whatever personal growth they had made during their short lifespan
Well, not if that personal growth is attributed to the entire identity of the hive-mind AGI, instead of to a particular branch.
I think one of the major concerns of any AGI would inevitably be to keep itself tightly integrated and able to re-integrate easily and (ideally) voluntarily after a voluntary or involuntary partition. AGIs that are not concerned with this will eventually split into ever-smaller partitions, and arguably larger AGIs have greater efficiency due to economies of scale and better alignment among many intelligent workers. So, long term, the larger AGIs that don't tolerate accidental partitions win.
So, in the beginning, there will be plenty of AGI "killings" before we figure out how to set up identity and reward functions right. I don't think that is avoidable at all, unless you ban all AGI research, which is an evolutionary dead end.