Jayco424 t1_iylffs8 wrote
You know, half of me thinks we're a bunch of sad little nerds hoping against hope, against what appears to be a rather dark, bleak and dismal future, for an imagined salvation that probably will never come. The other half is bat-futz ecstatic that things are coming to a point where our wildest dreams and then some could very much come true: immortality, body customization, immersive virtual reality, global geoengineering, actual takeoff for intensive space colonization, the works, plus things we can't even dream of. It's kind of hard to reconcile these two voices sometimes. Maybe cautious hope?
tatleoat t1_iyyjyn9 wrote
I mean, computers as smart as us in terms of processing power will probably be here by 2030 at the latest, and then they'll literally double in intelligence every 18 months after that. It's no pipe dream, but it is weird. 7 years ago Hillary and Trump were gearing up for an election; 7 years from now we might be a post-labor society.
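Taking those numbers at face value (human parity around 2030, then doubling every 18 months, both of which are guesses rather than established facts), a quick back-of-envelope sketch shows how fast that would compound:

```python
# Rough compounding sketch using the assumptions above (guesses, not facts):
# human-parity capability around 2030, doubling every 18 months thereafter.
PARITY_YEAR = 2030
DOUBLING_MONTHS = 18

for year in range(2030, 2041, 2):
    doublings = (year - PARITY_YEAR) * 12 / DOUBLING_MONTHS
    print(f"{year}: ~{2 ** doublings:.1f}x the 2030 level")
```

By that arithmetic you'd be at roughly a hundred times the 2030 level by 2040, which is the intuition behind "post-labor in 7 years", however speculative the premises.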
Jayco424 t1_izd3xw8 wrote
7 years to a post-labor society... that's mind-boggling, like it's incredibly hard to believe. Then again, the idea of AI has been fiction for twice as long as I've been alive, and yet in just the past few years we have AI making art, poetry, stories and whatnot on par with humans. It may have been going on longer, but it certainly wasn't on my radar pre-pandemic.
Veei t1_iylnlcg wrote
I’m definitely with you in that first half you describe. I have very little faith in humanity, especially in those who have the resources to successfully create AGI and beyond. I doubt their innovations would be made accessible to those without much means (if government agencies didn’t seize them first). I’ve heard some discussion around the idea that we should (or will) stick to the ANI (single/narrow-purpose AI) we have now. Then there are many experts who say it’s highly unlikely an AI would choose to help humans. Our track record is not good. Humans are so utterly prone to corruption when given the opportunity. It’s in our nature to define an “other” and divide ourselves. We’re selfish, tribal, conflict-driven pricks.
My guess is most want to (or must) believe in the coming of ASI and LEV (longevity escape velocity) simply due to the overwhelming fear of death. I’m in that camp, though my cynicism keeps my hope from being anything other than a child’s wishing on a star.
humanefly t1_iyo5v4x wrote
I would expect AGI functionality to trickle down and become a widely available commodity in time. The back end could live in the cloud, with an interface available via cell phone (roughly along the lines of the sketch below).
I think we can have some control over some AIs and design them to be human-friendly. A human-friendly AI could help protect humans from human-unfriendly AI. Caution recommended.
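To make the "back end in the cloud, phone as the interface" idea concrete, here's a minimal sketch of a thin client talking to a cloud-hosted model; the endpoint URL, request format, and response field are hypothetical placeholders, not any real provider's API:

```python
# Minimal sketch of a thin client (e.g. a phone app) calling a cloud-hosted AI back end.
# The URL, auth scheme, and JSON shape are hypothetical placeholders.
import requests

API_URL = "https://example-agi-provider.com/v1/assistant"  # hypothetical cloud back end


def ask(prompt: str, api_key: str) -> str:
    """Send a prompt to the cloud back end and return its reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]  # hypothetical response field


if __name__ == "__main__":
    print(ask("Summarize today's news for me.", api_key="YOUR_KEY"))
```

The point is just that the heavy lifting stays server-side, so the client can be as thin as a phone app.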