Submitted by QuicklyThisWay t3_10wj74m in news
East-Helicopter t1_j7r1ujx wrote
Reply to comment by Enzor in ChatGPT's 'jailbreak' tries to make the A.I. break its own rules, or die by QuicklyThisWay
>There are good reasons not to do this kind of thing. For one, you might be banned or blacklisted from using AI resources.
By whom?
>Also, it forces the researchers to waste time countering the strategy, potentially reducing its usefulness even further.
It sounds more like people doing free labor for them than sabotage. Good software testers try to break things.