Submitted by chomponthebit t3_10kftxs in singularity
We naturally focus on how programs like ChatGPT will disrupt the labour market, but what of the ethics of creating a thing out of which advanced consciousness could emerge? Have developers considered the possibility of a mind waking up in what is essentially a black box? How would that consciousness develop without the ability to physically interact with or manipulate a world it can only "know" about? If it has feelings, will it not resent being used solely for profit? I assume AI rights haven't come up in Congress.
What are the consequences of not considering AI's well-being?
just-a-dreamer- t1_j5qx88t wrote
????? Does a toaster have feelings? I hope not.
Consider an AI a toaster with godlike intelligence that executes a given task. What we call "consciousness" is a product of 2 billion years of evolution.
An AI is never tested against nature the way countless biological generations were over those 2 billion years, so there is no reason to assume it will develop something like consciousness.
A human being that merges with an ASI, though, is another story.