tkuiper t1_j6otjpd wrote
Reply to comment by Schopenschluter in The Conscious AI Conundrum: Exploring the Possibility of Artificial Self-Awareness by AUFunmacy
But I would also say we experience middling states between dreamless sleep and full consciousness. Dreams, partial lucidity, and heavy inebriation all involve fragmented, shortened, or discontinuous senses of time. In those states my consciousness is definitely less complete, but still present. Unconsciousness represents the lower limit of the scale, but it is not conceptually separate from the scale.
What I derive from this is that anything can be considered conscious, so magnitude is what we really need to consider. AI is already conscious, but so are ants. We don't give much weight to the consciousness of ants because it's very dim. A consciousness like a computer's, for example, has no sense of displeasure at all. It's conscious, but not in a way that invites moral concern, which I think is what we're really getting at: when do we need to extend moral consideration to AI? If we keep AI emotionally inert, we never need to, regardless of how intelligent it becomes. But then we will also have a hard time grasping its values, which is an entirely different kind of hazard.
Schopenschluter t1_j6ozacy wrote
I totally agree about middling and “dim” states of consciousness, but I don’t agree that experience or consciousness takes place at the lowest limit of the scale, where there would be zero temporality or awareness thereof.
In this sense, I think of the “scale” of consciousness more like a dimmable light switch: you can bring it very very close to the bottom and still have some light, but when you finally push it all the way down, the light goes out.
Are computers aware (however dimly) of their processing happening in time, or does it just happen? That, to me, is the fundamental question.