CaseyTS t1_izuw4tc wrote
Reply to comment by __corpse_ in AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers by hackinthebochs
I'm gonna answer in terms of mass automation and machine intelligence instead of consciousness specifically. I think artificial consciousness is already a part of AI to a small extent, and will propel automation.
Whether mass AI automation helps or hurts people will, I think, depend almost entirely on how it is adopted, by whom, when, and for what. That's the story with technology: whether a new tech gets adopted is a crapshoot that often has little to do with how useful it is. For instance, England kept gas lanterns instead of electric lanterns for quite a long time because the infrastructure had been built to support gas and it costs money to change - even though electric lights take less labor, are safer, leave the air cleaner for the city's people, etc.
Likely, if artificial general intelligence becomes widespread, it'll be controlled by the people who own tech companies. Some of those people are beholden to morals and ethics; some are not. Who specifically ends up holding a relevant patent may well shape how this technology develops. If someone interested in military and security gets hold of this sort of tech, expect synthetic super-soldiers at first. If a philanthropist gets it, expect robots doing dangerous or humanitarian work. Those initial uses will probably shape how the technology develops afterward: people usually optimize a technology for its established uses.
Source: my ass and a Tech & Society class I took some years ago.