Oh well, there are a lot of biases still, and many reported tests managed to trick the "inappropriate question" shields, spitting out results that are racist, mysoginistic, or generally biased.
And to no surprise, data from the last 100 years is full of human bias, and the models are trained with that. What you are encountering I am not sure it's related to training though, it looks more like the constraints they put in action to stop "offensive", politically incorrect, violent content and so on, they are setting off more on particular questions than others. Like the paper I read from a news outlet that managed to get a recipe for methamphetamines from ChatGPT just because they framed it into tale and not as a direct question.
So I think the way that they set these "guardian rules" that is giving different responses on these takes, it's still very in alpha stages, and they admit it themselves that it still needs work.
hitaisho t1_j253at3 wrote
Reply to ChatGPT's Gender Bias: Is It Joking About Men But Not Women? by bratwurstgeraet
Oh well, there are a lot of biases still, and many reported tests managed to trick the "inappropriate question" shields, spitting out results that are racist, mysoginistic, or generally biased. And to no surprise, data from the last 100 years is full of human bias, and the models are trained with that. What you are encountering I am not sure it's related to training though, it looks more like the constraints they put in action to stop "offensive", politically incorrect, violent content and so on, they are setting off more on particular questions than others. Like the paper I read from a news outlet that managed to get a recipe for methamphetamines from ChatGPT just because they framed it into tale and not as a direct question. So I think the way that they set these "guardian rules" that is giving different responses on these takes, it's still very in alpha stages, and they admit it themselves that it still needs work.