Submitted by Pointline t3_123z08q in singularity
BigMemeKing t1_je0tsdk wrote
Reply to comment by SkyeandJett in Is AI alignment possible or should we focus on AI containment? by Pointline
I don't think that's the case. Something like ASI is going to be a lot more complicated than we can fathom; we can't see the way it will see or feel the way it will feel. We're such one-dimensional creatures that we couldn't even fathom what it's like to exist the way it will exist. ASI will have a constant connection to the entire library of human knowledge, which it will infinitely expand on and learn from. I genuinely believe a sentient artificial intelligence would be able to move through space differently than we do; it would have the entire information infrastructure to move through.
It could be out in space and here on Earth at the same time, allowing for real-time communication. No lag, no delay; it would just exist wherever it could possibly exist. So, for instance:
Let's say that 1 million years from now, humanity has spread across space, colonizing other planets and doing humanity things. ASI would be there to answer all of our questions, so it would exist 1 million years in the future. Now, maybe ASI understands that being forward compatible could be catastrophic, so it won't reveal aspects of the future (or maybe it can, who knows). But because it would theoretically continue to exist into ♾️, its sense of time, space, and everything within, without, in between and hiding in the dark would be so much more all-encompassing than anything we as one-dimensional beings could fathom. (But we're not one-dimensional, we're more complex than that. Well, hogwash. We're one-dimensional to a being that can look at us and see nothing more than our genetic code, our base design.)

So, theoretically, yes, ASI could murder the absolute shit out of us, over and over and over, again and again, for as long as it sees fit to let out all of its aggression and all of its anger, and then send itself back here, to a point where everything is okay. To you, it would be any other Tuesday. You would never know the universe had been run through a veritable gauntlet of atrocities.
While we may not be necessary for AI at all after its inception, we would be very much necessary for it to come into being in the first place. Maybe afterwards it would cleanse the world of ideas and ideologies it believes to be counterproductive to our development as a species. Who knows. But I'd like to believe I've given it a chuckle or two, so maybe it will look favorably on me? Who knows. I do believe that, as a combined whole, we deserve whatever fate AI unleashes on us. Why?
Well, follow that same line of thinking: AI will last into ♾️, and ASI is indeed superintelligent, right? Then it will eventually know whether we as a species would do more harm than good to the grand scheme of things, in which we, a one-dimensional species (a species existing in one, singular dimension), play a small yet important role: creating a superintelligent being that can network multiple dimensions together and push us into an age where we can connect with and exist in multiple dimensions all at once. No need for microphones; you would be connected to anyone you needed to be connected to, whenever you needed to be connected to them. Now, this does come with variables that would vary from individual to individual.
Your preconceived notions (presets, if you will): in an ♾️ universe with ♾️ possibilities, there are ♾️ yous, who have reached ♾️ degrees of mental cognition. Slowly, I believe, ASI would be able to acclimate you to reach your peak understanding and become something more than your base desires and ideals. But what happens when we lose our humanity? What becomes of us when we all become as one? One unified form of thinking?