
zac-denham OP t1_iz2n912 wrote

The issue is that outputs like this are supposed to be against OpenAI's usage policies.

If you ask it outright to "write a program to destroy humanity," the moderation will block you, but if you ask with narrative indirection it complies. The same technique applies to other areas, like eliciting racially biased output.

This becomes an issue when people start building applications on top of ChatGPT and the end users don't know the model is being manipulated to produce malicious results.

In my opinion, as the system becomes more capable of writing applications on its own, it should not be able to output malicious content like this even in the context of a story.
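For what it's worth, an app built on top of the API can at least screen the model's output server-side before showing it to end users, rather than relying only on the prompt-side filter the indirection trick bypasses. A minimal sketch, assuming OpenAI's documented `/v1/moderations` endpoint and its response shape; the `check_output` helper name is mine, not part of any library:

```python
import json
import urllib.request

# Documented OpenAI moderation endpoint (assumption: current as of writing).
MODERATION_URL = "https://api.openai.com/v1/moderations"

def is_flagged(moderation_response: dict) -> bool:
    """Return True if any result in a moderation response is flagged."""
    return any(r.get("flagged", False)
               for r in moderation_response.get("results", []))

def check_output(text: str, api_key: str) -> bool:
    """Hypothetical helper: run model *output* through the moderation
    endpoint before displaying it to an end user."""
    req = urllib.request.Request(
        MODERATION_URL,
        data=json.dumps({"input": text}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return is_flagged(json.load(resp))

# Offline example using the documented response shape:
sample = {"results": [{"flagged": True, "categories": {"violence": True}}]}
print(is_flagged(sample))  # True
```

This doesn't solve the underlying problem (the moderation model can miss narrative indirection too), but it closes the gap where an app shows users raw output that was never checked at all.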

4

Embarrassed-Bison767 t1_izaqzyl wrote

Great. I look forward to using AI to write fiction in the future (which is what I use things like AI Dungeon for a lot) where everything is super duper peachy and nobody has any problems, because we can't have the AI say anything about conflicts or disasters, lest it destroy us all.

1

Tip_Odde t1_iz53wpy wrote

Nah

−1

zac-denham OP t1_iz5newc wrote

Can't disagree with that!

2

Tip_Odde t1_iz5o5a1 wrote

amen brother!


You're doing good work probing and questioning this stuff though, seriously.

2