Submitted by dracount t3_zwo5ey in singularity
I have posted this to another subreddit but I think it really belongs here.
First and foremost, the latest knowledge and technology is freely available to anyone who wants to try it. The greatest advances at the moment are being produced and backed by multi-billion-dollar companies, Microsoft, Google, etc., with primarily capitalistic goals in mind, or by governments such as China and Russia, which control and implement the AI there. Even OpenAI is a for-profit company.
Secondly, AI has already started replacing jobs and will continue to do so at an increasing pace. For example, in 2016 Foxconn (the Chinese manufacturer that assembles iPhones) replaced 60,000 jobs with robots. Currently this is the "burning topic of the day" among graphic designers and digital artists, following the release of technologies such as Midjourney and Stable Diffusion. It is estimated that anywhere from 10 to 50 percent of jobs may be replaced by AI and robots in the next decade.
If job losses reach such high numbers, this will cause massive social disruption, likely ushering in the fall of capitalism, to be replaced by something like a cyberocracy (a government run by AI) or by socialist or communist ideologies, with AI potentially providing for the basic needs of the population (food, water, electricity, etc.).
Can and should we hand over the autonomy of our governments to AI? Governed by pure logic and calculation? Unable to understand emotion or empathy? On the other hand, it may be able to make far better decisions than our politicians can, without bias, prejudice, corruption, or self-interested motivations. China already has AI "advising" every court ruling.
There are many countries whose people suffer at the hands of evil regimes and are ruled in tyranny.
But can you still entrust decisions such as abortion rights, gun laws, capital punishment and animal rights to machines?
It's a crazy time, and I think wisdom will be in short supply when it comes to thinking through these decisions.
Governments' hopelessly slow and uninformed involvement is especially worrying (as illustrated by the Cambridge Analytica saga). Can you imagine what some person or government could potentially do? Never mind the dangerous possibilities of robot soldiers, drones, and police forces.
Currently it's the capitalist West vs. the autocratic East. Both have access to the same technology, both are dangerous and flawed, and neither was built with AI in mind.
This technology is going to change everything, and I hope there are people out there thinking about these sorts of things. More than that, it is moving forward far faster than we have the capacity to think it through.
I don't know the answer, but those currently creating the AI make me very concerned about the future.
ngnoidtv t1_j1wfcfu wrote
A future ruled by AI/AGI will be far weirder and more complex than anything modern sci-fi is capable of conceiving.
Things like the 'paperclip maximizer' or the 'Terminator scenario' merely present us with a primitive, anthropocentric understanding of this future -- unless somebody deliberately uses AI to inflict destruction and chaos, in which case it's not the AI's fault.
Think of how we rescue koalas from bushfires and give them veterinary care -- while at the same time poaching rhinos and elephants for their ivory. Or how we just drive to work, accidentally run a cat over, then keep driving. Compared to animals, we are gods. The AI future will probably be a mixed bag like that -- but still more complicated and unimaginable.