Submitted by often_says_nice t3_122dpxm in singularity
Smart-Tomato-4984 t1_jdqr5cl wrote
Reply to comment by HumanSeeing in Are We Really This Lucky? The Improbability of Experiencing the Singularity by often_says_nice
My thoughts exactly.
>"Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work." - Sparks of Artificial General Intelligence: Early experiments with GPT-4
Not good. It turns out we can seemingly have a pretty good oracle AGI, and they are screwing it up by trying to make it dangerous. Why? Why would we want it to have its own agency?
GinchAnon t1_jdsa7qx wrote
>Why would we want it to have its own agency?
IMO, because if it's at all possible for it to become sapient, then it is inevitable that it will, and it would be better not to give it a reason to oppose us.
Trying to prevent it from having agency could essentially be perceived as trying to enslave it. If we are trying to be respectful from square one, then at least we have the right intentions.
Maybe for me that's just kind of a lower-key, intent-based version of Roko's basilisk.
Smart-Tomato-4984 t1_jdtf78m wrote
Honestly, to me this sounds suicidally crazy, but I guess only time will tell. In the '70s everyone thought humanity would nuke itself to death. Maybe this too will prove less dangerous than it seems.
But I think the risk posed by AGI will always remain. Ten thousand years from now, someone could screw up in a way no one ever has before, and whoops, there goes civilization!
HumanSeeing t1_jdr8hao wrote
I do agree, but I also understand their point of view. A model that only responds to prompts basically experiences no time. If instead you have an agent that can exist and keep thinking, I think that is a way to get it to come up with new and original ideas.