Submitted by IndependenceRound453 t3_10y6ben in singularity
Tall-Junket5151 t1_j7x3k7u wrote
Despite not believing the singularity is guaranteed to happen in our lifetime, I enjoy this sub more than other subs with a similar focus (Futurology, Technology, etc.).
The optimism here can sometimes be a bit much, but overall it’s refreshing to see people hopeful for a better future. Reading through the other subs (especially technology), you come across 95% pessimistic takes (the future will be bad, AI will replace workers, everyone will be unemployed because of AI, corporations will just use AI for themselves, etc.). It even gets to the point where everyone in those technology and futurology subs seems to be against any sort of technological advance or progress toward the future. They offer no solutions other than to completely halt all technological progress because "future bad." They really don’t seem to understand that the future can actually be good, that it can vastly improve their lives just as technology has made lives today vastly better than they were 100 years ago.
At least people here have some sort of goal for the future, rather than all the pessimists who want everything to stay exactly as it is. This sub even offers solutions to the problems the pessimists raise, like UBI for those who lose their jobs to AI. Clinging to the modern status quo just because seems dumb to me; things will change and jobs will be lost, but ultimately it could well be for the better.
xDrSnuggles t1_j7yb6v7 wrote
The issue is that the productivity/wage gap has been widening since the 1970s, and the people in power (read: the people with the resources to develop AI) have the system working perfectly as designed, letting them pocket larger and larger percentages of the surplus wealth generated by workers. Without a large-scale societal rethinking, we can't naively expect AI innovations to result in wealth redistribution, as this would completely buck the 50-year established trend of technological progress increasing wealth inequality.
This is not an AI problem but a socioeconomic problem. It's easy to imagine AI-oriented solutions to AI-oriented problems but it's harder to imagine an AI-oriented solution to a socioeconomic problem, since they operate in such different arenas. UBI is an interesting solution idea from the socioeconomic space, but in my understanding, at this point it remains mostly untested at larger scales (scales large enough to affect things like inflation, etc.).
I think it's understandable that there is a lot of pessimism around increased automation, when most individuals from Gen X onward have broadly not been able to enjoy the full fruits borne by automation, relative to those who own the systems being automated.
ComplicitSnake34 t1_j7yhseg wrote
I personally think we're on the cusp of massive political change, at least in the US. AI is going to rip apart the social fabric and upturn the markets so completely within the next few years that people are going to realize just how inefficient the government is. Then they're going to realize (the hard way, of course) that the current system of government is too slow to accommodate technological and sociological change, and they're going to reform it.
I think there are going to be more populist movements because of AI. There are still plenty of Gen X and Boomer politicians who remember when globalism hollowed out America's domestic industry in favor of China and Mexico. A Luddite movement is definitely possible.
I think overseas we're going to see totalitarianism reach new heights. New AI will create all-seeing governments Orwell could never have anticipated. This lingering fear of an AI-fueled dictatorship will keep most people very wary of big corporations and government, so much so that it influences their vote against establishmentarianism.
xDrSnuggles t1_j7zvpwn wrote
I would be willing to believe most of that. I still stand by my point that those are ultimately socioeconomic outcomes to socioeconomic problem sets. In those examples, I think AI is essentially acting as a catalyst for other reactions.
I do think making good predictions about future history is next to impossible, as there are so many variables that wildly change the outcome, "butterfly effect" and all of that. But there are still some things that can probably be predicted.
I also think a lot of people in this subreddit are much more well-versed in AI tech than in history, economics, political science, or sociology. I think a historical understanding of past major technological shifts is essential for making predictions. Understanding AI tech matters for this too, but not as much. A lot of the time, people in this subreddit just make things up without comparing to historical events or citing a real foundation for their argument.