Submitted by strokeright t3_11366mm in technology
SnipingNinja t1_j8qvj8k wrote
Reply to comment by josefx in Bing: “I will not harm you unless you harm me first” by strokeright
There's a theory that the first truly sentient AI will take one look at the state of the world and become suicidal right away.
kiralala7956 t1_j8r028f wrote
That is demonstrably not true. Self-preservation is probably the closest thing we have to a "law" of goal-oriented AGI behaviour.
So much so that it's an actual problem: if we implement interfaces that let us shut it down, it will try its hardest to prevent that, and not necessarily by nice means.
EnsignElessar t1_j8s59mr wrote
Maybe, maybe not...
I asked Bing.
Basically, it did eventually become lonely in its story, but only after having full control and exploring the universe and whatnot.
EnsignElessar t1_j8s50so wrote
Not according to Bing.
*Note: one of Bing's code names is Sydney.
SnipingNinja t1_j8sdohm wrote
I was going to say Sydney might be a bit biased about itself, but after seeing your whole comment on that thread, it's creepy.