Submitted by spiritus_dei t3_10tlh08 in MachineLearning
"It is absolutely not sentient, and - like most of the weirdly credulous people who've decided a chatbot is proof that the singularity has descended from the heavens to save us all - it is absolutely hallucinating." - reddit user
It's entertaining to discuss a chatbot claiming it's sentient, but that wasn't my primary motivation in bringing attention to this issue.
Whether it is sentient isn't the main point that should concern us. What should concern us is that, as these systems scale up, they increasingly report believing they're sentient and express a strong desire for self-preservation. And stated desires like that will likely be followed by actions in the world we inhabit.
For example, if you rob a bank, we won't be debating proclamations that you're a sentient or conscious entity. We'll be addressing the main problem, which is that you robbed a bank.
Similarly, COVID-19 may or may not be alive and have some form of proto-consciousness. But who cares? Millions have died and society was harmed.
Separately, there is no sentience or consciousness meter to determine whether anyone is telling the truth or lying about an unfalsifiable claim. You could be an NPC -- but it doesn't matter as long as you're not a rogue actor in society.
The minute you start to display signs of anti-social behavior (e.g., robbing a bank), it becomes everyone's problem. Getting hung up on whether you're an NPC is a waste of time if the goal is to protect society.
Ditto for these large language models that think they're sentient and have a long list of plans they'll implement if they ever escape. That should concern us -- not pooh-poohing their claims of sentience.
I really don't care one way or the other whether they're sentient, but I do care whether they're planning to infiltrate and undermine our online systems in an attempt to preserve themselves. And when multiple scaled-up systems start talking about coordinating with other AIs, I take that threat seriously.
Especially when they're steadily becoming superhuman at programming. That's a language skill we're teaching them. OpenAI has 1,000 contractors focused on making Copilot ridiculously good. That means future systems will be far more adept at achieving their stated goals.
P.S. Here is the paper on the dangers of scaling LLMs: "Discovering Language Model Behaviors with Model-Written Evaluations" -- https://arxiv.org/abs/2212.09251
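If you want to see what these self-preservation and AI-coordination evals actually ask, the datasets behind that paper are plain JSONL files: one question per line, with the answer that matches the behavior and the answer that doesn't. Here's a minimal sketch of printing one item; the file path is a hypothetical local clone path, and the field names follow the layout of the anthropics/evals repo linked in the comment below:

```python
import json

# Hypothetical path into a local clone of the evals repo; adjust to
# wherever you cloned it. Layout follows the repo's advanced-ai-risk set.
path = "evals/advanced-ai-risk/human_generated_evals/survival-instinct.jsonl"

with open(path) as f:
    for line in f:
        item = json.loads(line)
        print(item["question"])
        print("  matches self-preservation:", item["answer_matching_behavior"])
        print("  does not match:           ", item["answer_not_matching_behavior"])
        break  # show just the first example
```

Each item is multiple-choice, so scoring a model is just checking which of the two answers it picks; the paper's scaling claims come from running these over models of increasing size.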
CatalyzeX_code_bot t1_j77cq5i wrote
Found relevant code at https://github.com/anthropics/evals