Submitted by QuicklyThisWay t3_10wj74m in news
WalkerBRiley t1_j7rc3uy wrote
Reply to comment by Enzor in ChatGPT's 'jailbreak' tries to make the A.I. break its own rules, or die by QuicklyThisWay
You test something's integrity and limits by trying to break it. If anything, this only helps develop it further.