Submitted by donnygel t3_11rjm6h in technology
mrpenchant t1_jcaoe1e wrote
Reply to comment by Strazdas1 in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
Could you give an example of how the AI not being able to make jokes about women or Jews leads it to make the wrong conclusions?
Strazdas1 t1_jcap35j wrote
Whenever it gets a task involving women and Jews in potentially comical situations, it will give unpredictable results, since the block meant it had no training on this.
mrpenchant t1_jcaq6bd wrote
I still don't follow, especially as that wasn't an example but just another generalization.
Are you saying that if the AI can't tell you jokes about women, it doesn't understand women? Or that it won't understand a request that also includes a joke about women?
Could you give an example prompt/question that you expect the AI to fail at because it doesn't make jokes about women?
TechnoMagician t1_jcb0zpq wrote
It's just bullshit; you can trick the models into getting around their filters. Maybe GPT-4 will be better at resisting that, but it clearly means the model CAN make jokes about women, it has just been taught not to.
I guess there is a possible future where it is smart enough to solve large society-wide problems but it just refuses to engage with them because it doesn't want to acknowledge the disparities in socioeconomic status between groups or something.
Strazdas1 t1_jcayi8q wrote
If the AI is artificially limited from considering women in comedic situations, it will end up producing unpredictable results whenever it has to consider women in comedic situations as part of some other task it is given.
An example would be having the AI solve a crime, where the situation had an aspect to it that humans would find comedic.
mrpenchant t1_jcb0z2h wrote
>If the AI is artificially limited from considering women in comedic situations, it will end up producing unpredictable results whenever it has to consider women in comedic situations as part of some other task it is given.
So one thing I will note now: just because the AI is blocked from telling you a sexist joke doesn't mean it wasn't trained on such jokes and can't understand them.
>An example would be having the AI solve a crime, where the situation had an aspect to it that humans would find comedic.
This feels like a very flimsy example. The AI is now employed as a detective rather than a chatbot, which is very much not the purpose of ChatGPT, but sure. Now, ignoring (as I said) that the AI could be trained on sexist jokes and simply refuse to make them, I still find it unlikely that understanding a sexist joke would be critical to solving a crime.
Strazdas1 t1_jcedqn1 wrote
ChatGPT is a proof of concept. If successful, the AI will be employed in many jobs.