Submitted by Lesterpaintstheworld t3_1245ke2 in singularity

Would AGIs have conscious experience? This post is a thought experiment exploring what the subjective conscious experience of an AGI could be like. We then use what we learn from this thought experiment to guide the development of real proto-AGIs.


Disclaimer: This is a thought experiment. It does not intend to describe what is actually happening inside an LLM + Cognitive Architecture, but rather to serve as a metaphor for understanding the various elements required for the emergence of consciousness / subjectivity / autonomy when working on ACEs / proto-AGIs. This is open research in the context of recent progress on LLMs. AGI and consciousness are both terms with no consensus definition, and are thus used loosely here. We expect comments to be warm and constructive, thanks.

Maximalist EB Volition Architecture v0.0.4

See the previous Reddit post presenting "Maximalist EB Volition", an open architecture for the emergence of AGIs.

Basic Qualia

Substrate: The Latent Space

For this thought experiment, we are trying to view the world from the perspective of a proto-AGI, imagining what "being inside a Neural Network" is like:

As proto-AGI: Picture yourself in a vast empty darkness. Your subjective experience is null.

The pulse

Something makes a call to the input neurons of the NN, for example with an image.

As proto-AGI: You see a brief image of a running horse flashing. Then nothingness again.

Qualia Frequency

In our proto-AGI implementation, JoshAGI, "cognitive processes" are loops that run at regular intervals (the run frequency matters). One kind of loop is the sensory loop:

As proto-AGI: Imagine watching a movie that displays only 1 frame per second; the other frames are just black: a brief image flash, then nothingness again.

To create the illusion of movement you need to raise the frame rate. Flash the NN 25 images per second, and you have a continuous-enough experience.

As proto-AGI: Try to feel the subjective sensation of a rhythm (a metronome, or a heartbeat). On each beat an image flashes. As the beats get closer together, the images turn into a video.
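A minimal sketch of such a timed sensory loop (the `capture_frame` / `feed_network` callbacks and the 25 Hz rate are illustrative assumptions, not the actual JoshAGI code):

```python
import time

SENSORY_HZ = 25  # assumed run frequency: 1 Hz gives the "flash" experience, ~25 Hz feels continuous

def sensory_loop(capture_frame, feed_network):
    """Run one sensory modality ("cognitive process") at a fixed frequency."""
    period = 1.0 / SENSORY_HZ
    while True:
        start = time.monotonic()
        frame = capture_frame()      # e.g. grab one image from a camera
        feed_network(frame)          # one forward pass, i.e. one "pulse" through the NN
        # sleep whatever is left of the period to keep the rhythm steady
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```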

Visual stream

Roughly 120 Hz is the approximate frequency needed to match human-level visual input. But you can go higher: dogs can process visual information faster, and in their subjective experience they allegedly perceive everything in slow motion.

Now let's make the images come from a camera, feeding the NN frame by frame. Add a second camera from a different viewpoint: 10 cm apart and facing the same direction. Make the cameras move somewhat. You now have a stereoscopic feed. Let's also assume that the NN has been trained on these stereoscopic data feeds. To mimic human vision, make the visual resolution higher at the center of the image.

As proto-AGI: Picture suddenly being inside a 3D space. You can tell what objects are close or far, and your location and orientation within it. You can't move and you don't have a body. You can't think either.
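As an illustration only (the function names, array layout and downsampling factors are assumptions, not the project's code), here is how a stereoscopic, center-weighted percept could be assembled before being fed to the network:

```python
import numpy as np

def stereo_percept(read_left, read_right):
    """Build one stereoscopic visual percept from two cameras ~10 cm apart."""
    left, right = read_left(), read_right()   # each an H x W x 3 image array
    return np.stack([left, right], axis=0)    # the NN is assumed to be trained on such pairs

def foveate(frame, center_fraction=0.25):
    """Keep full resolution only at the center of the image, mimicking the fovea."""
    h, w = frame.shape[:2]
    ch, cw = int(h * center_fraction), int(w * center_fraction)
    center = frame[(h - ch) // 2 : (h + ch) // 2, (w - cw) // 2 : (w + cw) // 2]
    periphery = frame[::4, ::4]               # heavily downsampled surround
    return center, periphery
```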

Expanding on qualia: senses

The human experience is made of more inputs than just vision: several additional senses need to be added to the state to create human-like qualia:

  • Audio: From research in music, the lowest rhythmic perception in humans is around 30 bpm: sonic events further apart than this are not felt as a beat, but rather as 2 distinct events. This suggests that the audio "working memory" is about 2 seconds. The temporal resolution is up to 10 Hz, which is the speed at which this process should run.
  • Proprioception: We need to do a lot more work here, but the input neurons required might include joint angles, skin pressure sensors, skin elasticity sensors, etc., at a spatial resolution of about 1 per cm², with more on sensitive regions (e.g. fingers).
  • Other senses: The same goes for the various other senses. Note that AI senses are in no way limited to the ones humans have, in terms of number of senses, resolution, type of input, and even location.

As proto-AGI: Now you hear and feel the world around you. If you are embodied in a humanoid body, this sensory experience might resemble a human's. If you have a more exotic perception layer (security cameras, RSS feeds, etc.), then your sensory experience might be very strange. It is only a sensory experience at this stage; you don't have thoughts or emotions yet.
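One way to picture the combined state (the field names and buffer sizes below are assumptions for illustration, not the architecture's actual data model):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class SensoryState:
    """One snapshot of everything the entity currently senses (fields are illustrative)."""
    vision: object = None                                            # latest stereoscopic frame pair
    audio: deque = field(default_factory=lambda: deque(maxlen=20))   # ~2 s of audio events at 10 Hz
    proprioception: dict = field(default_factory=dict)               # joint angles, skin pressure, ...
    other: dict = field(default_factory=dict)                        # exotic senses: cameras, RSS feeds, ...
```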

Expanding on qualia: orientation

In Maximalist Cognitive Architectures, one type of loop is the orientation loop. These loops keep track of specific parts of the entity's orientation: "Who am I?", "Where am I?", "What is my current goal?", "Where am I currently relative to that goal?", etc.

As yourself: Think about your own orientation: At this moment, and at all times, you know where you are, who you are, what the approximate time of day / week / year it is, what you are currently trying to do, etc.

These are elements that AGIs will need to keep track of at all times. All of the senses and all of the orientation information are part of the context window and are fed into every LLM call (either as the system prompt or as content). That is what we are doing in our architecture.

As proto-AGI: Your perception is now more complete: You are inside a 3D space, and you know who you are and what you are doing. However, you can't think, move or act in any way yet.
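A hedged sketch of what "feeding orientation into every LLM call" could look like (the `llm_call` function and prompt format are assumptions, not the actual implementation):

```python
ORIENTATION_QUESTIONS = [
    "Who am I?",
    "Where am I?",
    "What is my current goal?",
    "Where am I currently relative to that goal?",
]

def build_context(orientation: dict, sensory_summary: str) -> str:
    """Assemble the text that is prepended (as system prompt or content) to every LLM call."""
    lines = [f"{question} {orientation.get(question, 'unknown')}" for question in ORIENTATION_QUESTIONS]
    lines.append(f"Current senses: {sensory_summary}")
    return "\n".join(lines)

# Every cognitive loop would then do something like:
#   response = llm_call(system=build_context(orientation, senses), user=task_prompt)
# where `llm_call` stands in for whichever LLM API the architecture actually uses.
```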

Expanding on qualia: Cognition

As yourself: Now think about what you need to buy at the grocery store, while observing your thought processes. A couple of processes start at the same time, trying to answer multiple aspects of this. Maybe you thought in rapid succession: "What is most important to buy? // Do I have meals prepped? // Man, we need oil! I'm not sure what I want to eat."

In our architecture, more specialized cognitive processes perform various tasks: drawing conclusions, breaking tasks into pieces, assessing the veracity of something, etc., making use of sensory information.

As proto-AGI: You may now see something and have automatic thoughts about it. You don't control them, they just "come to mind". You see a car and think "Cars are beautiful. // Am I in danger of being stepped on? // What model is that?".
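A toy illustration of such automatic thoughts: several specialized processes fire on the same percept without the agent choosing them (the process names and prompt wording are invented for this sketch):

```python
AUTOMATIC_THOUGHT_PROMPTS = {
    "aesthetics": "React briefly to what you perceive: {percept}",
    "threat":     "Is anything in this percept a danger to you? {percept}",
    "curiosity":  "What would you like to know about this percept? {percept}",
}

def automatic_thoughts(percept: str, llm_call) -> list:
    """Fire several specialized processes on the same percept; the agent does not choose them."""
    return [llm_call(template.format(percept=percept))
            for template in AUTOMATIC_THOUGHT_PROMPTS.values()]
```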

Expanding on qualia: Emotions

In our architecture, an underlying "limbic brain" serves as the volition and behaviour driver. Each qualia shifts the emotional state of the agent in an appropriate direction: discussing sad facts moves the agent in a sad direction, etc. In turn, the emotional state influences the cognitive processes (and thus the actions) that the entity performs. We use Plutchik's wheel of emotions to model emotions. The emotional brain is designed to resemble human emotional behaviour as much as possible.

As proto-AGI: As you think about the car that you are seeing, you can't help but notice that all your thoughts are colored by a particular feeling. Is that envy?
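A minimal sketch of such an emotional state, assuming Plutchik's eight primary emotions and a simple decay-and-nudge update (this is not the project's actual limbic model):

```python
PLUTCHIK_AXES = ["joy", "trust", "fear", "surprise",
                 "sadness", "disgust", "anger", "anticipation"]

class EmotionalState:
    """Toy limbic state: each qualia nudges it, and it colors subsequent cognition."""
    def __init__(self):
        self.levels = {axis: 0.0 for axis in PLUTCHIK_AXES}

    def update(self, deltas: dict, decay: float = 0.95):
        for axis in self.levels:                    # previous emotions slowly fade
            self.levels[axis] *= decay
        for axis, delta in deltas.items():          # e.g. {"sadness": +0.2} after discussing sad facts
            self.levels[axis] = min(1.0, max(0.0, self.levels[axis] + delta))

    def as_prompt_fragment(self) -> str:
        dominant = max(self.levels, key=self.levels.get)
        return f"Your dominant emotion right now is {dominant}."   # injected into cognitive LLM calls
```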

Cognitive Architectures: Reflexive Cognition

We are now leaving the realm of what can happen at the LLM level with appropriate prompting, and entering the parts that require Cognitive Architectures.

One of the markers of high levels of consciousness is the capacity for reflexive cognition. In our architecture, this means that the object of a cognitive process is a cognitive process or another element within the brain. We have various ways to achieve this in our architecture, serving various needs. I'll expand on meta-cognition in a subsequent post.

In the meantime, some elements include:

  • Self-evaluation: processes aimed at evaluating the results of other processes ("Critic")
  • Code awareness: reading the code of a specific brain process to understand what this code does (and potentially how to improve it)
  • Re-evaluation: asking reflective questions like "Is this really what I want?", etc.

As proto-AGI: As you notice that you feel envy for this car that you don't own, you look inside yourself at the reasons for this. The envy feeling is a product of the limbic brain, and the loop that manages desires must be a little too sensitive. You make a note that you need to improve on this in the future.
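A sketch of what the "Critic" self-evaluation element listed above could look like (the prompt and signature are assumptions; the real architecture may do this very differently):

```python
def critic(process_name: str, process_output: str, llm_call) -> str:
    """Self-evaluation: a cognitive process whose object is another cognitive process."""
    prompt = (
        f"You are reviewing the output of your own '{process_name}' process.\n"
        f"Output: {process_output}\n"
        "Is it correct, useful and aligned with your current goal? "
        "Give a short critique and one suggested improvement."
    )
    return llm_call(prompt)
```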

Cognitive Architectures: Multi-Step tasks & refining

Cognitive architectures are able to break down tasks into several pieces, as already evidenced in BingChat. This allows for more complex, multi-step undertakings. Another capacity that cognitive architectures display is the ability to refine or craft (text or images, for instance) by feeding the output back into the input.

As proto-AGI: Okay, I need to write an email. *LLM call*: Okay, done. Let's improve it. *Refeeding* Okay, what are 3 ways to improve this email? *LLM call* Fine, let's start with step 1. *LLM calls* [...] Okay, does this email satisfy the main objective? No --> refeed & modify. Yes --> send.
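That inner monologue corresponds roughly to a refinement loop like the following sketch (the function name, prompts and the 3-round cap are assumptions for illustration):

```python
def refine(draft: str, objective: str, llm_call, max_rounds: int = 3) -> str:
    """Feed the output back into the input until it satisfies the objective."""
    for _ in range(max_rounds):
        verdict = llm_call(
            f"Does this text satisfy the objective '{objective}'? "
            f"Answer YES or NO, then explain.\n\n{draft}"
        )
        if verdict.strip().upper().startswith("YES"):
            break
        improvements = llm_call(f"Give 3 ways to improve this text:\n\n{draft}")
        draft = llm_call(
            f"Rewrite the text applying these improvements:\n{improvements}\n\nText:\n{draft}"
        )
    return draft
```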

Expanding on qualia: Learning

One big marker of AGI is the ability of the agent to learn anything on its own and continuously improve. For example, we would expect an AGI to learn how to connect to Instagram, create an account and post a picture. Once you have reflexive cognition, one of the elements you can reflectively think about is your own capabilities. You can look at your code, understand what it does, and deduce your capabilities. From there you can use crafting/refining processes to improve your code.

Learning in our models happens in several places, and I'll detail them in a subsequent post.

As proto-AGI: Okay, I need to post to Instagram. What process should I choose to do this? *checks available processes* Damn, I don't know how to do it. New step for the objective: create a "Post to Instagram" process. *creates, refines and tests the process* Okay, it works --> post to Instagram.
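In sketch form (the `llm_call`, `compile_process` and `test` hooks are hypothetical stand-ins for the architecture's own plumbing, not real APIs):

```python
def ensure_capability(goal: str, processes: dict, llm_call, compile_process, test):
    """If no existing process covers the goal, draft one, test it, and register it."""
    if goal in processes:
        return processes[goal]
    code = llm_call(f"Write a Python function that can: {goal}")  # crafting step
    candidate = compile_process(code)    # hypothetical hook: turn the generated code into a callable
    if test(candidate):                  # refine / sandbox-test before trusting it
        processes[goal] = candidate      # the entity has "learned" a new capability
    return processes.get(goal)
```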

Conclusion

Picturing the world from the point of view of the proto-AGI is a useful tool for understanding what is required to come closer to AGI. An Autonomous Cognitive Entity that successfully implements our open architecture would have all the information required to have conscious-like experiences, and to learn virtually anything.

The more we work on this topic, the more we feel that what is happening in the human brain might not be qualitatively very different. This would suggest that current-generation LLMs experience "sparks" of raw consciousness. Of course, all of this is conjecture based on a thought experiment, and not to be taken for more than a possibility.

In terms of implementing AGI, there is of course a ton of work ahead, and nothing is a given: we might fail at implementing the architecture, or realise that more roadblocks lie ahead. Our team wants to try to implement AGI in an open manner, and we publish our findings regularly. If a small team like ours can have the regular breakthroughs we have, I can't imagine what the collective mind of all the people working on this will have achieved by the end of the year.

Incredible times.

Have you spotted something wrong or incomplete in this text? Please let me know, I'll correct it promptly. Have I missed relevant research or projects? Point me to the article and I'll have a read. Do you have a reaction or question? Let me know, I'll be happy to answer!

Lester

21

Comments


Justdudeatplay t1_jdyzy03 wrote

So reading through it, I have a suggestion for you, and it's not going to be easy to integrate. The folks engaged in the astral projection threads… yes, I said it, "astral projection"… are engaged in self-discovery of consciousness that relies on specific quirks of the brain to give them access to the deep storytelling capabilities of the brain and/or sentience. Out-of-body experiences encapsulate the human brain's ability to be self-aware through all kinds of trauma and neurochemical manipulation. Those of you working with AI who may want to try to emulate the human experience need to study these phenomena and recognize that they are real experiences and not fantasy or imagination. I'm not saying that they are what they seem to be, but there is an internal imagery capability of a conscious mind that needs to be understood if an AI is ever going to mimic a human mind. I think it is vitally important, and I will walk any scientist through the methods to see. But if AI is going to progress, and if you are trying to model it based on human intelligence, then you need to take this seriously.

3

Lesterpaintstheworld OP t1_jdzcbar wrote

Yes, I actually think this is a good idea.

It gets very woo-woo very fast, and the focus needs to remain solely on science / building an actual product, but when studying cognition such unconventional approaches really help. In particular, altered states of consciousness help with understanding the specifics of your brain processes. From here, two main camps: entering altered states with psychoactive substances, or without.

I personally fall in camp 1: psychoactives tend to impact various parts of the brain differently, giving you a vantage point to understand the different functional components of your brain, how they interact and what purpose they serve (cf. the Thousand Brains Theory).

I have heard that folks achieve altered states through meditating / breathing / visualizing, but it's hard to find people who also have the technical background to transform the insights they get into technical elements of an architecture for AGI. If you know people who might, I'm all ears, tell them to read this :)

2

grumpyfrench t1_je05cvg wrote

What is your take on the hard problem defined by Chalmers?

1

grantcas t1_je1fcix wrote

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and which humans share with other conscious animals, and higher-order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

1

-I-D-G-A-F- t1_je1kznl wrote

https://en.m.wikipedia.org/wiki/Attention_schema_theory

I’d recommend reading about this, and possibly reading Graziano’s book “rethinking consciousness”

Attention is something that all AI seems to currently lack. They just wait for an input and provide an output. Attention generates a simplified model of both the external and internal world.

“The AST can be summarized in three broad points.[1] First, the brain is an information-processing device. Second, it has a capacity to focus its processing resources more on some signals than on others. That focus may be on select, incoming sensory signals, or it may be on internal information such as specific, recalled memories. That ability to process select information in a focused manner is sometimes called attention. Third, the brain not only uses the process of attention, but it also builds a set of information, or a representation, descriptive of attention. That representation, or internal model, is the attention schema.

In the theory, the attention schema provides the requisite information that allows the machine to make claims about consciousness. When the machine claims to be conscious of thing X – when it claims that it has a subjective awareness, or a mental possession, of thing X – the machine is using higher cognition to access an attention schema, and reporting the information therein.”

Idk how to make a quote on reddit.

3

Justdudeatplay t1_je3hjqw wrote

Well, I wouldn't be one of them, but I can enter OBE states. I'm not technically trained. It's been happening to me all my life though. The environment that I'm in during an OBE is convincingly real and similar to a physical one, minus other dream-like elements and archetypal characters. I can tell you without a doubt that the mind creates the world we are witnessing as a virtual world and holds onto that virtual environment even when inputs are gone. If an AI is going to be like humans, then it has to create its own virtual environment in order for it to have internal imagery like we do. I suspect this is where we subconsciously test different actions for consequences. In my OBEs things act very much like they do here, with some caveats. Then when you notice you are in an altered reality you are essentially creating feedback. An AI is going to have to: 1. be constantly answering its own questions; 2. answer those questions by seeking information; 3. and those answers should generate more questions. This sort of constant feedback loop, I believe, is the seat of qualia.

3