ShoonSean t1_j9ckq9n wrote
Hell no. It would be cool to have more advanced AI react to combat and whatnot more realistically, but making them feel the fear and pain they're otherwise simply emulating is psychotic.
--FeRing-- t1_j9cx81g wrote
The next logical question: if the NPC reacts as if it is afraid of death, even to the point of being able to describe why it is afraid of death and to relate to you the concept of pain and its direct connection to your actions, how do you know it isn't ACTUALLY feeling fear and pain?
ShoonSean t1_j9dbiqg wrote
I suppose it's possible. Hard to say. It might be possible to create "dumb" AI in the future that are basically just more advanced versions of the language models we have today, good enough to act as actors in whatever you need them for. I'm sure there will be moral conundrums of some form, but maybe AI intelligence will end up being so different from ours that our moral concerns won't bother it in the slightest.
turnip_burrito t1_j9eafjn wrote
It might be that if we separate the different parts of the AI enough in space, or add enough communication delays between the parts, then it won't experience feelings like suffering, even though the outputs are the same?
Idk, there's no answer.
SgathTriallair t1_j9ebuq3 wrote
Character AI already exists. If you could crunch that down to run inside a game, I think it would be more than capable of simulating a personality better than we'd ever want in a video game.
Spire_Citron t1_j9dxt33 wrote
Wouldn't you need some sort of mechanism through which to experience pain? Like, even if something is smart enough to perfectly understand those concepts, it's not going to spontaneously generate the kind of systems through which humans experience pain. No matter how well I understand pain, if I don't have working nerves, I won't feel pain.
Malkev t1_j9e8o9t wrote
Emotional pain
Spire_Citron t1_j9ec8lr wrote
That also requires a mechanism. I firmly believe that an AI can't actually experience emotions just by learning a lot of information about them. Mimic them, sure, but I don't think you can just spontaneously develop a system through which emotion is felt.
CubeFlipper t1_j9h0sh6 wrote
Interesting question. I think this would require us to understand the nature of pain. At the end of the day, brain or machine AI, it all boils down to data. What data and processes produce "pain" and why? Is pain an inherent part of intelligence and learning?
Spire_Citron t1_j9h4m38 wrote
I think we understand these systems well enough to know that just having knowledge about them isn't enough. We have experience with them going wrong in humans. You can lose the ability to feel pain if the physical structures that enable that are damaged. Knowledge won't help you there, no matter how much you have, if you don't have working nerves. Now, it might be possible to design something in an AI that mimics those systems, but I think that would have to be a very intentional act. It couldn't just be something that happens when the AI has learnt enough about pain unless it also has the ability to alter its own systems and decides to design such a thing for itself.
DeveloperGuy75 t1_j9e66mi wrote
That’s a solipsism argument. You might as well be asking how you would react towards actual people, as in how do you really know they’re afraid?
MultiverseOfSanity OP t1_j9k8852 wrote
Occam's Razor. There's no reason to think I'm different from any other human, so it's reasonable to conclude they're just as sentient. But there's a ton of differences between myself and a computer.
And if we go by what the computer says it feels, well, then conscious feeling AI is already here. Because we have multiple AI, such as Bing, Character AI, and Chai, that all claim to have feelings and can display emotional intelligence. So either this is the bar and we've met it, or the bar needs to be raised. But if the bar needs to be raised, then where does it need to be raised to? What's the metric?
DeveloperGuy75 t1_j9knt41 wrote
No dude... no computer is emotional right now, even though it might say so, because of how they work. ChatGPT, the most advanced thing out there right now, just predicts the next word. It's a transformer model that can read texts backwards and forwards so that it can make more coherent predictions. That's it. That's all it does. It finds and mimics patterns, which is excellent for a large language model, especially given the data it has consumed. But it can't even do math and physics right, and I mean it's worse than a human at them. It doesn't "work out problems"; it's simply a "word calculator."

Also, Occam's razor is something you're using incorrectly. You could be a psychopath, a sociopath, or some other mentally unwell person who is certainly not "just like anyone else". Occam's razor means the simplest explanation for something is usually the correct one. Usually. And that's completely different from the context you're using it in.
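For anyone curious what "just predicts the next word" looks like in practice, here's a minimal sketch of an autoregressive next-token loop using the Hugging Face transformers library, with GPT-2 standing in as the model (the prompt and the 20-token length are arbitrary choices for illustration). This isn't a claim about how ChatGPT is actually built or served, just the basic "word calculator" loop in its simplest form:

```python
# Toy next-token prediction loop: score every possible next token,
# greedily keep the most likely one, append it, and repeat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The NPC looked at the player and said"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                       # generate 20 tokens, one at a time
        logits = model(input_ids).logits      # scores over the whole vocabulary
        next_id = logits[0, -1].argmax()      # pick the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything the model "says" comes out of that loop; there's no separate module that feels anything about the words being produced.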
MultiverseOfSanity OP t1_j9kqt1v wrote
Note that I wasn't definitively saying it was sentient, but rather building off the previous statement: if an NPC behaves exactly as if it has feelings, you said treating it otherwise would be solipsism. And you make good points about modern AI that I'd agree with. However, by all outward appearances, it displays feelings and seems to understand. This raises the question: if we cannot take it at its word that it's sentient, then what metric is left to determine whether it is?
I understand more or less how LLMs work, and I understand that it's text prediction, but they also function in ways that are unpredictable. The fact that Bing has to be restricted to only a few exchanges before it starts behaving in a sentient way is very interesting. These models work with hundreds of billions of parameters, and their design is based on how human brains work. It's not a simple input-output calculator. And we don't know exactly at what point consciousness begins.
As for Occam's Razor, I still say it's the best explanation. The AI sentience debate often raises the issue of how I know that humans other than myself are sentient. Well, Occam's Razor: "the simplest explanation for something is usually the correct one." For me to be the only sentient human, there would have to be something special about me, and something else going on with all 8 billion other humans that makes them not sentient. There is no reason to think that, so Occam's Razor says other people are likely just as sentient.
Occam's Razor cuts through most solipsist philosophies because the idea that everybody else has more or less the same sentience is the simplest explanation. There are "brain in a jar" explanations and "it's all a dream" explanations, but those aren't simple. Why would I be a brain in a jar? Why would I be dreaming? Such explanations make no sense and only serve to make the solipsist feel special. And if I am a brain in a jar, then someone would've had to put me there, so if those people are real, why aren't these other people?
TLDR: I'm not saying any existing AI is conscious, but rather asking how consciousness in an AI could be determined if it isn't. Because if we decide that existing AI are not conscious (which is a reasonable conclusion), then clearly taking them at their word that they're conscious isn't acceptable, nor is going by behavior, because current AI already says it's conscious and displays traits we typically associate with consciousness.