Evilsushione t1_j4co145 wrote
Reply to comment by oddlyspecificnumber7 in Does anyone else get the feeling that, once true AGI is achieved, most people will act like it was the unsurprising and inevitable outcome that they expected? by oddlyspecificnumber7
I think that is where we are heading, but I'm afraid some of the models might go rogue if we create something that is truly self-aware. It would be unpredictable and very powerful. That being said, I still think we need to pursue AI, but we need to be diligent about preventing sentience, or figure out how to peacefully coexist with it if we do accidentally create it, or extinguish it if it doesn't want to peacefully coexist with us; we need to build in back doors to kill it if necessary.
AsheyDS t1_j4hhl3y wrote
What if self-awareness had limits? We consider ourselves self-aware, but we don't know everything that's going on in our brains at any given moment. If self-awareness were curtailed so it was only functional, would it be as dangerous as you anticipate?
Evilsushione t1_j4hvfyz wrote
I'm sure, like everything about life, there are levels, and probably no definitive line between sentient and not sentient. I don't know what level would be dangerous, or if any would be. Maybe we can program in such a high respect for human life that it won't be dangerous at all. Maybe a high degree of empathy for humans, some kind of mothering instinct where they WANT to take care of us. But just remember, a lot of mothers will still eat their young.