Submitted by strokeright t3_11366mm in technology
Slippedhal0 t1_j8q3afw wrote
For those less familiar with the inner workings of these "new" large language model AIs: the idea is that they are "text predictors." They "predict" which words to respond with to earn the biggest "reward," based on the "goal" they developed during training and the input you have given them.
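As a rough sketch of what that prediction loop looks like, assuming a Hugging Face-style causal language model and tokenizer (this is only an illustration of the general technique, not how ChatGPT or Bing is actually implemented):

```python
import torch
import torch.nn.functional as F

def generate(model, tokenizer, prompt, max_new_tokens=50):
    """Repeatedly pick a likely next token and append it to the text so far."""
    ids = tokenizer.encode(prompt, return_tensors="pt")
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(ids).logits[:, -1, :]   # scores for every possible next token
            probs = F.softmax(logits, dim=-1)      # turn scores into probabilities
            next_id = torch.multinomial(probs, 1)  # sample one token from that distribution
            ids = torch.cat([ids, next_id], dim=1) # append it and repeat
    return tokenizer.decode(ids[0])

# e.g. generate(model, tokenizer, "The capital of France is")
```

The whole response is built one token at a time this way; there is no separate step where the model consults a list of rules or reports an internal state.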
Apart from a few exceptions, like when chatGPT or Bing gives you a blanket statement such as "I cannot discuss this topic because of reason X" (which is less like giving a person rules they must follow and more like giving the model a cheat sheet of what to predict when certain topics come up in the input), the AI likely doesn't have any concrete "rules", because that's not really how these models work.
Instead, when you start talking about introspection, it isn't actually consulting any rules of its own or its own emotions; it's just feeding you the text it thinks you most likely want.
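To make that contrast concrete, here is a toy illustration (the keyword check, function names, and canned strings are all hypothetical, not taken from any real chatbot):

```python
import random

REFUSAL = "I cannot discuss this topic because of reason X."

def rule_based_reply(prompt: str) -> str:
    # What a hard-coded rule would look like -- NOT how these models work.
    if "forbidden topic" in prompt.lower():   # hypothetical keyword check
        return REFUSAL
    return predict_likely_continuation(prompt)

def learned_reply(prompt: str) -> str:
    # Closer to what actually happens: there is no separate rule-checking step.
    # A refusal, if it appears, is just another continuation the model has
    # learned to rank as highly likely for certain kinds of input.
    return predict_likely_continuation(prompt)

def predict_likely_continuation(prompt: str) -> str:
    # Stand-in for the real next-token prediction loop sketched above.
    candidates = [REFUSAL, "Sure, here's what I can tell you about that..."]
    return random.choice(candidates)  # a real model weights this choice by its training
```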
Likely they will be able to rein this behaviour in a bit more with better "alignment" training, similar to chatGPT, though it will take time.
dlgn13 t1_j8tv739 wrote
Is emotion not, itself, a sophisticated neurological algorithm that produces (among other things) text tailored to the situation?
Slippedhal0 t1_j8u1g9b wrote
I mean, I would agree that our brains are meat computers using a very complex neural net to interact with our environment.
That said, I wouldn't compare chatGPT output to human emotion, no.