Submitted by Sure_Cicada_4459 t3_127fmrc in singularity
Jeffy29 t1_jeeo3va wrote
Reply to comment by Sure_Cicada_4459 in The only race that matters by Sure_Cicada_4459
>One thing I keep seeing is people making a buttload of assumptions that are tainted by decades of sci-fi and outdated thought. Higher intelligence means a better understanding of human concepts and values, which means it's easier to align.
I am so tired of the "tell AI to reduce suffering, it concludes killing all humans will reduce suffering for good" narrative. It's made-up BS from people who have never worked on these systems, and it has a strong stench of human-centric chauvinism: it assumes that even an advanced superintelligence is a total moron compared to the average human, somehow capable of wiping out humanity yet at the same time a complete brainlet.
FaceDeer t1_jef9cg6 wrote
Indeed. A more likely outcome is that a superintelligent AI would respond, "Oh, that's easy, just do <insert some incredibly profound solution that I, as a human of regular intelligence, obviously can't come up with>," and everyone collectively smacks their foreheads because they never would have thought of it. Or they look askance at the solution because they don't understand it, run a trial experiment, and are baffled that it works better than they hoped.
A superintelligent AI would likely know us and what we desire better than we know ourselves. It's not going to be some dumb Skynet that lashes out with nukes at every problem because nukes are the only hammer in its toolbox, or whatever.
fluffy_assassins t1_jefjf6h wrote
Made me think of sentient yogurt.