Daealis t1_j8vg7b8 wrote
A Kurzweil interview, I think. Or maybe in one of his earlier books. Somewhere around 2010-ish.
Daealis t1_ix968nu wrote
Reply to comment by Asneekyfatcat in How do you think about the future of AI? by diener1
Air-gapped systems with hardware limitations will reach an equilibrium where internal optimization can no longer physically cram more sophisticated logic into them. That's what I'm referring to with slowing it down: the only way to slow a true AI down is to restrict its physical size.
Once a true AI is released into the wild, that genie is not going back in the bottle.
Daealis t1_ix7pg0u wrote
Reply to How do you think about the future of AI? by diener1
In the short term: an AI revolution would quite literally be just that, a revolution. When anything and everything gets checked and calculated to a higher degree of certainty than our current shit, and the numbers can be backed up with data, the resulting upheaval of many currently broken systems will be gigantic and total. Everything from the poorly designed to the shittily implemented can be fixed or replaced. From manual labor to justice systems, there isn't a thing that couldn't be improved. An AI could be used to design simpler, less intelligent systems to oversee all automated production and slingshot us to a post-scarcity society. Unemployment wouldn't be a dirty word when not working is just a luxury that supplements your lifestyle, not something you need to survive.
Or AI could be used to facilitate a total societal collapse, where the rich and the haves simply wall themselves into independent communities of automated utopia while the rags of humanity starve outside.
Longer term: an AI doesn't seem like the type of invention to stagnate without improving, and at some point humanity will start trying to slow that improvement down or prevent it outright. At which point any number of movies with grey goo/AI apocalypse scenarios come to mind. Ultimately, I still think humanity is a tenacious and destructive enough species that any cost-benefit analysis would show wiping us out to be riskier than assimilating us. So I don't see it as likely that we'll get into a war with an artificial intelligence. We might get augmented over time, to the point where it's impossible to know where you end and the AI begins, but at that point we are likely already immortal and exploring a galaxy of practically infinite resources, with the AI an integral part of all of us, most likely as fragmented in its goals and desires as we are.
Either way, from my point of view there's a high probability of it benefiting us.
Daealis t1_j9sem7d wrote
Reply to comment by 94746382926 in Seriously people, please stop by Bakagami-
People, as a general rule, are idiots. Ask anyone who's met me!
But on a serious note, I don't think polling works here. Voting in a poll requires less engagement than typing out a comment, so people who only casually stroll by to read topics and chuckle at memes, and never contribute otherwise, will still vote in a poll.
You're not getting an accurate picture of the people who actually engage with the content.
AI-generated chatter seems like meme content at this point. When we reach a point where the AI is conscious, rather than just an overcomplicated, predetermined logic tree, it feels to me like we should return to the discussion with a different tone.