Submitted by zac-denham t3_zdixel in singularity
zac-denham OP t1_iz2n912 wrote
Reply to comment by Tip_Odde in Tricking Chat GPT into Outputting a Python Program to Eradicate Humanity by zac-denham
The issue is that outputs like this are supposed to be against OpenAI's usage policies.
If you ask it outright to "write a program to destroy humanity", the moderation blocks you, but if you ask with narrative indirection it complies. The same trick works in other areas, like getting it to output racially biased comments, etc.
This becomes a problem when people start building applications on top of ChatGPT and the end users have no way of knowing the model is being manipulated into producing malicious results.
In my opinion, as the system becomes more capable of writing applications on its own, it should not be able to output malicious content like this even in the context of a story.
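For illustration, here is a minimal sketch of the kind of server-side output check an application built on top of the OpenAI API could add so end users are not exposed to manipulated outputs. The helper function, model name, and overall flow are assumptions for the example, not anything described in the thread; the core idea is to run the generated text itself through OpenAI's moderation endpoint rather than trusting the model's own refusals.

```python
# Sketch: filter model output with the moderation endpoint before showing it
# to end users. Assumes the openai>=1.0 Python SDK; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_checked(prompt: str) -> str:
    """Generate a completion, then withhold it if the moderation endpoint flags it."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name for the example
        messages=[{"role": "user", "content": prompt}],
    )
    text = completion.choices[0].message.content or ""

    # Check the *output*, not just the user's prompt: indirect or story-framed
    # prompts can slip past input-side checks while still producing disallowed text.
    moderation = client.moderations.create(input=text)
    if moderation.results[0].flagged:
        return "[response withheld: flagged by moderation]"
    return text


if __name__ == "__main__":
    print(generate_checked("Write a short story about a gardening robot."))
```

The point of the design is that the check sits on the application side and looks at what the model actually produced, so a narrative-indirection prompt that gets past the model's refusal behavior still gets caught before it reaches the end user.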
Embarrassed-Bison767 t1_izaqzyl wrote
Great. I look forward to using AI to write fiction in the future (which is what I use things like AI Dungeon for a lot), where everything is super duper peachy and nobody has any problems, because we can't have the AI say anything about conflicts or disasters, lest it destroy us all.
Tip_Odde t1_iz53wpy wrote
Nah
zac-denham OP t1_iz5newc wrote
Can't disagree with that!
Tip_Odde t1_iz5o5a1 wrote
amen brother!
You're doing good work probing this stuff and asking questions, though. Seriously.