Submitted by Malachiian t3_12348jj in Futurology
Surur t1_jdur5b3 wrote
Reply to comment by 4354574 in Microsoft Suggests OpenAI and GPT-4 are early signs of AGI. by Malachiian
Sure, but my point is that while you may be conscious, you cannot really measure it objectively in others; you can only choose whether to believe them when they say they are.
So when the AI says it's conscious....
audioen t1_jdw2frs wrote
The trivial counterargument is that I can write a Python program that says it is conscious while being nothing of the sort, because it is literally just a program that always prints those words.
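The entire "conscious" program could be as short as:

```python
# A program that claims consciousness while being nothing of the sort:
# it unconditionally prints the same sentence every time it runs.
print("I am conscious.")
```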
It is too much of a stretch to regard a language model as conscious. It is deterministic -- it always predicts the same probabilities for the next token (word) if it sees the same input. It has no memory except the words already in its context buffer. It has no ability to spend more or less computation as a task demands more or less effort; data flows from input to output token probabilities with exactly the same amount of work each time. (The one exception is that as the input grows, processing takes longer, because the context matrix holding the input gets bigger. Still, the computation flows through the same steps and the same matrices; it just gets applied to progressively more words/tokens sitting in the input buffer.)
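To make the determinism point concrete, here is a minimal sketch (assuming the Hugging Face transformers library and the small gpt2 model, neither of which is named in the thread): running the same input through the model twice yields identical next-token logits, and hence identical probabilities.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # disable dropout; the forward pass is now deterministic

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits_a = model(**inputs).logits[0, -1]  # next-token logits, first run
    logits_b = model(**inputs).logits[0, -1]  # same input, second run

# Same input -> bit-identical distribution over the next token.
print(torch.equal(logits_a, logits_b))  # True
```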
However, we can probably design machine consciousness from the building blocks we have. We can give language models a scratch buffer they can use to store data and plan their replies in stages. We can give them access to external memory so they don't have to memorize the contents of Wikipedia; they can just learn language and use something like Google Search, same as the rest of us.
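A minimal sketch of that kind of system, with hypothetical llm() and web_search() helpers standing in for a model call and a search API (neither is a real library function): the scratch buffer accumulates notes across steps, and the external search tool stands in for memorized facts.

```python
def answer_with_tools(question: str, steps: int = 5) -> str:
    scratch = []  # the scratch buffer: notes accumulated across steps
    for _ in range(steps):
        thought = llm(f"Question: {question}\nNotes so far: {scratch}\n"
                      "Either write SEARCH: <query> or ANSWER: <final answer>.")
        if thought.startswith("SEARCH:"):
            query = thought.removeprefix("SEARCH:").strip()
            scratch.append(web_search(query))  # store retrieved facts
        else:
            return thought.removeprefix("ANSWER:").strip()
    # Fall back to answering from whatever notes were gathered.
    return llm(f"Question: {question}\nNotes: {scratch}\nGive a final answer.")
```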
Language models themselves can be simple, but systems built from them can display planning, learning from experience via self-reflection on prior performance, long-term memory, and other properties that at least sound like something approximating consciousness might be involved.
I'm just going to go out and say this: something like GPT-4 is probably on the level of a 200-IQ human when it comes to understanding language. The way we test it suggests it struggles to perform tasks, but this is mostly an artifact of the architecture: it goes directly from prompt to answer in a single step. Current research is adding the ability to plan, edit and refine the AI's replies, sort of like how a human makes multiple passes over an email, or realizes partway through writing that they said something stupid or wrong and goes back to erase the mistake. These are abilities we do not currently grant our language models. Once we do, their performance will most likely go through the roof.
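A sketch of such a multi-pass loop (again with a hypothetical llm() helper, since the thread names no specific API): the model drafts an answer, critiques its own draft, and rewrites it, instead of going from prompt to answer in one step.

```python
def refine(task: str, passes: int = 3) -> str:
    # First pass: the usual single-step prompt-to-answer draft.
    draft = llm(f"Answer the following task:\n{task}")
    for _ in range(passes):
        # The model reads its own draft and points out mistakes...
        critique = llm(f"Task: {task}\nDraft answer: {draft}\n"
                       "List any mistakes or weaknesses in the draft.")
        # ...then rewrites the draft with the critique in hand.
        draft = llm(f"Task: {task}\nDraft answer: {draft}\n"
                    f"Critique: {critique}\nWrite an improved answer.")
    return draft
```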
4354574 t1_jdwkos3 wrote
Well, I don’t believe consciousness is computational. I think Roger Penrose’s quantum brain theory is more likely to be accurate. So if an AI told me it was conscious, I wouldn’t believe it. If consciousness arose from complexity alone, we should see signs of it in all sorts of complex systems, but we don’t, and there is not even the slightest hint of it in AI. The AI people hate his theory because it implies genuine machine consciousness is very far off.
Surur t1_jdwqof7 wrote
> If consciousness arose from complexity alone, we should have signs of it in all sorts of complex systems
So do you believe animals are conscious? If so, which is the most primitive animal you think is conscious, and do you think they are as conscious as you are?
4354574 t1_jdx1c88 wrote
If you want to know more about what I think is going on, research Orchestrated Objective Reduction, developed by Penrose and anaesthesiologist Stuart Hameroff.
It is the most testable, and therefore the most scientific, theory of consciousness. It has made 14 predictions, which is 14 more than any other theory. Six of those predictions have been verified, and none have been falsified.
Anything else would just be me rehashing the argument of the people who actually came up with the theory, and I’m not interested in doing that.