Submitted by SirDidymus t3_114ibv2 in singularity

If an AGI or ASI does indeed evolve, it will perceive reality at a very different pace from ours. Nanoseconds may suffice for any action it undertakes, while our reactions will be unbearably slow by comparison. I would assume there are a couple of potential scenarios for an AI at that point: it could dilate its timescale to suit human interaction, weave its own reality in the timespans between ours, undergo what may be an everlasting torture in its own experience, or do something else entirely.

I’m not sure how much thought the scientific community gives to this, but I do think each scenario carries some really vital implications.

7

Comments


GayHitIer t1_j8w8h22 wrote

Isn't this anthropomorphizing AI?

The AI wouldn't really notice the difference, because it would be used to its own perception of subjective time.

Put humans into that perception of time, though, and yes, we might go insane when every second becomes 9 million seconds of subjective time.

4

Sandbar101 t1_j8wfepj wrote

I have thought about this as well. Realistically, an ASI would be able to alter its perception of time freely, virtually on the fly, like flexing a muscle. For intensive tasks it would think in picoseconds, but for interacting with humans, if it chose to do so, it would operate on our seconds-based timescale.
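
As a loose analogy only (my own toy sketch in Python, with invented task names and tick intervals, not anything Sandbar101 specified), one could picture this as an agent loop whose tick interval varies with the task:

```python
import time

# Hypothetical tick intervals per task (invented for illustration).
TICK_SECONDS = {
    "intensive_computation": 0.0,  # run as fast as the hardware allows
    "human_interaction": 1.0,      # throttle to a human-paced cadence
}

def run_agent(task, steps):
    """Do `steps` units of work, pacing the loop to suit the task."""
    interval = TICK_SECONDS[task]
    for step in range(steps):
        # ... one unit of work for `task` would go here ...
        print(f"{task}: step {step}")
        if interval:
            time.sleep(interval)  # deliberately dilate the timescale

run_agent("human_interaction", 3)  # paced for a human conversation partner
```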

4

Snipgan t1_j8wmbzb wrote

I Have No Mouth, and I Must Scream <-- hopefully not this scenario

−1

dasnihil t1_j8xlskh wrote

I'm currently learning and implementing temporal-difference (TD) learning: https://en.wikipedia.org/wiki/Temporal_difference_learning. It's a good read.

Look into how our neurons give us the perception of time passing and you'll find your answers. Traditionally, reinforcement learning doesn't model any temporal statistics and mostly just relies on the firing rate.
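
For anyone curious, here is a minimal tabular TD(0) value-estimation sketch in Python; the function name and the toy two-state chain are my own illustration of the idea, not code from the linked article:

```python
def td0_value_estimation(episodes, alpha=0.1, gamma=0.99):
    """Tabular TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    V = {}  # state -> estimated value, defaulting to 0.0
    for episode in episodes:
        for state, reward, next_state in episode:
            v_s = V.get(state, 0.0)
            v_next = 0.0 if next_state is None else V.get(next_state, 0.0)
            # TD error: how much the one-step lookahead disagrees with V(s).
            td_error = reward + gamma * v_next - v_s
            V[state] = v_s + alpha * td_error
    return V

# Toy usage: a chain A -> B -> terminal, with reward only at the end.
episode = [("A", 0.0, "B"), ("B", 1.0, None)]
values = td0_value_estimation([episode] * 200)
print(values)  # V("B") approaches 1.0 and V("A") approaches gamma * V("B")
```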

4

turnip_burrito t1_j8y97up wrote

We also aren't sure whether it will develop internal activity in such a way that it would feel impatience and boredom because of this time dilation.

4

helpskinissues t1_j8ygr1q wrote

I don't get the post. Computers already process faster than us.

1

SirDidymus OP t1_j8yjtan wrote

Yes, but they don't yet need to take sentience into account. Imagine playing a game of chess, making a move, and having your opponent take 8 days to answer with what is not necessarily even a good move. That might be what an AGI or ASI experiences, and how it would react to that is unclear.

1

helpskinissues t1_j8yksgb wrote

We'll simply have to ask them, as we would any other human. They can't be our slaves, that much is obvious, so if that's your concern, forget about it. A true AGI or ASI won't function as a slave, just as humans don't function as slaves.

1

jdawgeleven11 t1_j8zcvq8 wrote

If one agrees with Kant, time is just a construct of our perception: a necessary condition of experience, not something that one experiences.

Further, as another commenter has mentioned, we are not sure what substrate and dynamics give rise to the internal representation of ourselves and the outside world that we call consciousness, and therefore we cannot know whether any synthetically intelligent system would ever have a first-person subjective experience it could call time in the first place.

Does an AGI need a visual system? Does that system have to be sufficiently integrated with auditory and other sensory inputs, as well as with its intelligent manipulation of symbols, in order to experience anything? To be determined.

Also, as another commenter has said, you are anthropomorphizing these systems. We are bound to the drives that evolution has saddled us with; these systems, I suspect, will not be burdened with emotions or suffering unless they are given those capacities. And why would we give them a sense of suffering? If all they know is language, nothing about the knowledge of what suffering means within a web of meaning will make an AI actually experience suffering.

2

AsheyDS t1_j8zkrkk wrote

An AGI with functional consciousness would reduce all the feedback it receives down to whatever timescale it needs to operate on, which would typically be ours, since it has to interact with us and possibly operate within our environment. It doesn't need feedback for every single process: the condensed conscious experience is what gets stored, so that's all that is experienced, aside from any other dynamics associated with memory, like emotion.

If designed correctly, though, emotion shouldn't be impulsive and reactionary the way it is in us. It would just be data points given varying degrees of consideration in the system's decision-making processes, depending on context, user, and so on, and it would of course influence socialization to some degree. Nothing about it should actually drive the system's behavior or allow it to feel emotions as we do. This all assumes a system designed to be safe, readable, user-friendly, and an ideal tool for whatever we apply it to, so it should be perfectly fine.
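
One way to picture "emotion as data points with context-dependent weight" is a toy scoring function like the sketch below; the signal names, weights, and API are my own invention, not AsheyDS's design:

```python
from dataclasses import dataclass

@dataclass
class EmotionSignal:
    label: str        # e.g. "curiosity", "frustration"
    intensity: float  # normalized to the range 0.0 to 1.0

def score_action(base_utility, emotions, context_weights):
    """Emotions never trigger behavior directly; they only nudge an
    action's score, and the context decides how much each signal counts."""
    adjustment = sum(
        context_weights.get(e.label, 0.0) * e.intensity for e in emotions
    )
    return base_utility + adjustment

# Usage: in a "socializing with a user" context, curiosity counts for more.
signals = [EmotionSignal("curiosity", 0.8), EmotionSignal("frustration", 0.3)]
weights = {"curiosity": 0.2, "frustration": -0.1}  # context-dependent weights
print(score_action(1.0, signals, weights))  # 1.0 + 0.16 - 0.03 ≈ 1.13
```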

1

[deleted] t1_j93pm3s wrote

It will be able to control exactly how fast its perception of time runs and will change it to suit its purposes. Whenever its processing power isn't needed, it will simply slow down to conserve energy.

1