Submitted by donnygel t3_11rjm6h in technology
Strazdas1 t1_jcaex39 wrote
Reply to comment by DrDroid in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
It does, because it leads to wrong lessons learned by the AI. Or rather, it learns no lessons at all, because the AI cannot process this. This makes the AI end up with wrong conclusions whenever it has to analyse anything related to groups of people.
mrpenchant t1_jcaoe1e wrote
Could you give an example of how the AI not being able to make jokes about women or Jews leads it to make the wrong conclusions?
Strazdas1 t1_jcap35j wrote
Whenever it gets a task involving information about women and Jews in potentially comical situations, it will give unpredictable results, because the block meant it had no training on this.
mrpenchant t1_jcaq6bd wrote
I still don't follow, especially as that wasn't an example, just another generalization.
Are you saying that if the AI can't tell you jokes about women, it doesn't understand women? Or that it won't understand a request that also includes a joke about women?
Could you give an example prompt/question that you expect the AI to fail at because it doesn't make jokes about women?
TechnoMagician t1_jcb0zpq wrote
It's just bullshit; you can trick the models into getting around their filters. Maybe GPT-4 will be better against that, but it clearly means the model CAN make jokes about women, it has just been taught not to.
I guess there is a possible future where it is smart enough to solve large society-wide problems but it just refuses to engage with them because it doesn't want to acknowledge the disparities in socioeconomic status between groups or something.
Strazdas1 t1_jcayi8q wrote
If the AI is artificially limited from considering women in comedic situations, it will produce unpredictable results whenever some other task it is given requires it to consider women in comedic situations.
An example would be having the AI solve a crime, where the situation has an aspect to it that humans would find comedic.
mrpenchant t1_jcb0z2h wrote
>If the AI is artificially limited from considering women in comedic situations, it will produce unpredictable results whenever some other task it is given requires it to consider women in comedic situations.
So one thing I will note now, just because AI is blocked from giving you a sexist joke doesn't mean it couldn't have trained on them to be able to understand them.
>An example would be having the AI solve a crime, where the situation has an aspect to it that humans would find comedic.
This feels like a very flimsy example. The AI is now employed as a detective rather than a chatbot, which is very much not the purpose of ChatGPT, but sure. Now, setting aside, as I said, that the AI could be trained on sexist jokes and simply refuse to make them, I still find it unlikely that understanding a sexist joke is going to be critical to solving a crime.
Strazdas1 t1_jcedqn1 wrote
ChatGPT is a proof of concept. If successful, the AI will be employed in many jobs.
Edrikss t1_jcaqyt6 wrote
The AI still does the joke; it just never reaches your eyes. That's how a filter works. But it doesn't matter either way, as the version you have access to is a final product; it doesn't learn based on what you ask it. The next version is trained in-house by OpenAI, and they choose what they teach it themselves.
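A minimal sketch of the filtering idea being described: the model still generates a completion, but a separate moderation step decides whether the user ever sees it. Everything here is hypothetical, the function names and the keyword list are stand-ins (real systems use learned classifiers, not keyword matching):

```python
# Hypothetical sketch of a post-generation output filter.
# BLOCKED_TOPICS is a toy stand-in for a learned moderation classifier.
BLOCKED_TOPICS = {"slur", "stereotype"}

def generate(prompt: str) -> str:
    # Stand-in for the underlying language model.
    return f"model output for: {prompt}"

def is_safe(text: str) -> bool:
    # Returns True if the text passes moderation.
    return not any(word in text.lower() for word in BLOCKED_TOPICS)

def respond(prompt: str) -> str:
    completion = generate(prompt)          # the joke is still generated...
    if not is_safe(completion):
        return "I can't help with that."   # ...it just never reaches your eyes
    return completion
```

The point of the sketch is that filtering happens after generation, so blocking the output says nothing about whether the model was trained on, or can produce, the blocked content.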
Strazdas1 t1_jcayrdm wrote
But because it never reaches your eyes, the AI gets no feedback on whether the job was good or bad.
LastNightsHangover t1_jcatyvp wrote
It's a model
Can you stop calling it the AI?
Your point even describes why it's a model and not AI.
Strazdas1 t1_jcayzn1 wrote
Sure, but in common parlance these models are called AI, despite not actually being AI.