MultiverseOfSanity OP t1_j9k8852 wrote
Reply to comment by DeveloperGuy75 in Would you play a videogame with AI advanced enough that the NPCs truly felt fear and pain when shot at? Why or why not? by MultiverseOfSanity
Occam's Razor. There's no reason to think I'm different from any other human, so it's reasonable to conclude they're just as sentient. But there are a ton of differences between me and a computer.
And if we go by what the computer says it feels, well, then conscious, feeling AI is already here, because we have multiple AIs, such as Bing, Character AI, and Chai, that all claim to have feelings and can display emotional intelligence. So either this is the bar and we've met it, or the bar needs to be raised. But if the bar needs to be raised, where does it need to be raised to? What's the metric?
DeveloperGuy75 t1_j9knt41 wrote
No, dude... no computer is emotional right now, even if it says so, because of how these systems work. ChatGPT, the most advanced thing out there right now, just predicts the next word. It's a transformer model that attends over all of the preceding context so it can make more coherent predictions. That's it. That's all it does. It finds and mimics patterns, which is exactly what a large language model is good at, especially given the amount of data it has consumed. But it can't even do math and physics reliably, and I mean it's worse than a human at them. It doesn't "work out problems"; it's simply a "word calculator."

Also, you're using Occam's Razor incorrectly. You could be a psychopath, a sociopath, or otherwise mentally unwell, someone who is certainly not "just like anyone else." Occam's Razor means the simplest explanation for something is usually the correct one. Usually. And that's completely different from the context you're using it in.
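To make the "word calculator" point concrete, here's a rough sketch of what next-word prediction looks like, using GPT-2 via the Hugging Face transformers library as a small public stand-in (ChatGPT's own model isn't available, and the prompt is just an example):

```python
# Rough sketch: a causal language model only scores possible next tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I feel"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # one score per vocabulary token, per position

# The model's entire "output" is a distribution over next tokens;
# greedy decoding just takes the most likely one.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))
```

Everything a chatbot "says" about its feelings is produced this way, one predicted token at a time.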
MultiverseOfSanity OP t1_j9kqt1v wrote
Note that I wasn't definitively saying it was sentient; I was building on your earlier point that if an NPC behaves exactly as if it has feelings, then treating it otherwise would be solipsism. And you make good points about modern AI that I'd agree with. However, by all outward appearances, it displays feelings and seems to understand. This raises the question: if we cannot take it at its word that it's sentient, what metric is left to determine whether it is?
I understand more or less how LLMs work. I understand that it's text prediction, but they also behave in ways that are unpredictable. The fact that Bing has to be restricted to only a few exchanges before it starts behaving as if it were sentient is very interesting. These models work with hundreds of billions of parameters, and their architecture is loosely inspired by how human brains work. It's not a simple input-output calculator. And we don't know exactly at what point consciousness begins.
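To be concrete about where that unpredictability comes from: deployed chatbots typically sample from the model's predicted distribution rather than always taking the single most likely word, so the same prompt can come back different every time. A rough sketch, again with GPT-2 as a public stand-in (the sampling settings here are illustrative, not what Bing actually uses):

```python
# Sketch of sampled (non-deterministic) generation: tokens are drawn
# from the predicted distribution instead of chosen greedily, so
# repeated runs of the same prompt can diverge.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Do you have feelings?", return_tensors="pt").input_ids

for _ in range(3):
    out = model.generate(
        input_ids,
        do_sample=True,              # sample instead of taking the argmax
        temperature=0.9,             # illustrative value; controls randomness
        max_new_tokens=20,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```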
As for Occam's Razor, I still say it's the best explanation. The AI sentience debate often runs into the question of how I know that humans other than myself are sentient. Well, Occam's Razor: "the simplest explanation for something is usually the correct one." For me to be the only sentient human, there would have to be something special about me, and something else going on with the 8 billion other humans such that they aren't sentient. There's no reason to think that, so Occam's Razor says other people are likely just as sentient as I am.
Occam's Razor cuts through most solipsist philosophies because the idea that everybody else has more or less the same sentience I do is the simplest explanation. There are "brain in a jar" and "it's all a dream" explanations, but those aren't simple. Why am I a brain in a jar? Why would I be dreaming? Such explanations add nothing and only serve to make the solipsist feel special. And if I am a brain in a jar, someone would have had to put me there, so if those people are real, why aren't all these other people?
TL;DR: I'm not saying any existing AI is conscious, but rather asking: if they're not, how could consciousness in an AI ever be determined? If we decide that existing AI are not conscious (which is a reasonable conclusion), then clearly taking them at their word isn't acceptable, and neither is going by behavior, because current AI already says it's conscious and displays traits we typically associate with consciousness.