Submitted by QuicklyThisWay t3_10wj74m in news
imoftendisgruntled t1_j7p8tn4 wrote
Reply to comment by No-Reach-9173 in ChatGPT's 'jailbreak' tries to make the A.I. break its own rules, or die by QuicklyThisWay
You can print out and frame this prediction:
We will never create AGI. We will create something we can't distinguish from AGI.
We flatter ourselves that we are sentient. We just don't understand how we work.
No-Reach-9173 t1_j7ras30 wrote
AGI doesn't have to include sentience. We just kind of assume it will, because we can't imagine that level of intelligence without it, and we're still so far from AGI that we don't really have a grasp of how it will play out.