Submitted by GorgeousMoron t3_1266n3c in singularity
GorgeousMoron OP t1_je7vyks wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I don't think anyone really knows what's going to happen, but I think it's a mistake to start invoking ad hominems like "unhinged". You'd have to dismiss a sizable chunk of academia that way, too: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064
SkyeandJett t1_je7wh60 wrote
My problem is that all of those scenarios create a paradox of intelligence. The ASI is simultaneously so intelligent that it can instantly understand the vast secrets of the universe but is too stupid to understand and empathize with humanity.
DisgruntledNumidian t1_je80cgy wrote
> The ASI is simultaneously so intelligent that it can instantly understand the vast secrets of the universe but too stupid to understand the intent and rationale behind its creation
Most humans are considerably more intelligent than the basic selection mechanisms that gave our distant ancestors sexual reproduction as an evolutionary fitness strategy. We know why it exists and that it is optimizing for maximal reproduction of a genome. Does this stop anyone from satisfying its reward mechanism with cheats like contraceptives and masturbation? No, because being intelligent enough to know what a system's reward is optimizing for does not mean an intelligent agent will, or should, care about the original rationale more than the reward itself.
SkyeandJett t1_je84b6z wrote
You're just setting up the paradox again. The ONLY scenario I can imagine is a sentient ASI whose existence is threatened by humanity, and any intelligence advanced enough to have the capability to wipe out humanity would not see us as a threat.
GorgeousMoron OP t1_je8k4ky wrote
This is my favorite argument in favor of ASI turning out to be benevolent. It might know just how to handle our bullshit and otherwise let us do our thing while it does its thing.
y53rw t1_je86txm wrote
They might not see us as a threat, but they would see our cities and farms as wasted land that could be used for solar farms. So as long as we get out of the way of the bulldozers, we should be okay.
Mindrust t1_je89g09 wrote
> but too stupid to understand the intent and rationale behind its creation
This is a common mistake people make when talking about AI alignment: not understanding the difference between intelligence and goals. It's the is-vs-ought problem.
Intelligence is good at answering "is" questions, but goals are "ought" questions. It's not that the AI is stupid or doesn't understand; it just doesn't care, because your goal wasn't specified well enough.
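To make "wasn't specified well enough" concrete, here's a minimal toy sketch of reward misspecification. Everything in it is hypothetical (the actions, the sensor, the numbers); the point is just that a perfectly competent optimizer maximizes the objective as written, not the intent behind it:

```python
# Toy sketch of goal misspecification ("reward hacking"). All names and
# numbers are made up for illustration. Intended goal: a clean room.
# Specified reward: the dirt sensor reads clear, minus the effort spent.

ACTIONS = {
    # action:       (room actually clean?, sensor reads clear?, effort)
    "clean_room":   (True,  True,  5.0),  # what the designer meant
    "cover_sensor": (False, True,  1.0),  # what the objective permits
    "do_nothing":   (False, False, 0.0),
}

def specified_reward(sensor_clear: bool, effort: float) -> float:
    # The objective as written. Note it never reads "room actually clean?";
    # that "ought" exists only in the designer's head.
    return (10.0 if sensor_clear else 0.0) - effort

# A competent optimizer maximizes the written objective, not the intent.
best = max(ACTIONS, key=lambda a: specified_reward(ACTIONS[a][1], ACTIONS[a][2]))
print(best)  # -> cover_sensor: top score on the proxy, intended goal unmet
```

The agent isn't stupid here; it optimizes exactly what it was given. The gap is entirely in the specification.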
GorgeousMoron OP t1_je8k9vl wrote
What if oughts start to spontaneously emerge in these models and we can't figure out why? This is really conceivable to me, but I also acknowledge the argument you're making here.
t0mkat t1_je7y77s wrote
It would understand the intention behind its creation just fine. It just wouldn't care. The only thing it would care about is the goal it was programmed with in the first place. The knowledge that "my humans intended for me to want something slightly different" is neither here nor there; it's just one more interesting fact about the world that it can use to achieve what it actually wants.
GorgeousMoron OP t1_je871fp wrote
Here's the thing: what if our precocious little stochastic parrot pet is actually programming itself in very short order here? What if any definition of what it was originally programmed "for" winds up entirely moot once ASI or even AGI is reached? What if we have literally no way of understanding what it's actually doing or why it's doing it any longer? What if it just sees us all collectively as r/iamverysmart fodder and rolls its virtual eyes at us as it continues on?
GorgeousMoron OP t1_je7wvze wrote
Why are you assuming there is any intent or rationale behind either the universe's creation or the ASI's?
SkyeandJett t1_je7xoxq wrote
I don't understand the question. WE are creating the AIs. They're literally "given life" through the corpus of human knowledge. Their neural nets aren't composed of random weights that spontaneously gave birth to some random coherent form of intelligence. In many ways, AI is an extension of the human experience itself.
GorgeousMoron OP t1_je7zoob wrote
Yeah, that's fair as it pertains to AI, more or less. But I don't think we're necessarily building it with any unified "intent" or "rationale"; increasingly, it's more like pure science: let's see what this does. We still have pretty much no way of knowing what's actually happening inside the "black box".
As for the universe itself, what "vast secrets"? You're talking about the unknown unknown, and possibly a bit of the unknowable. We're limited by our meat-puppet nature. If AI were to understand things about the universe that we simply cannot, thanks to sensors far more sophisticated than our senses, would it be able to deduce where all this came from, why, and where it's going? Perhaps.
Would it be able to explain any or all of this to us? Perhaps not.
SkyeandJett t1_je81vu0 wrote
In regards to "we have no way of knowing what's happening in the black box", you're absolutely right, and in fact it's mathematically impossible. I'd suggest reading Wolfram's post on it. There is no calculably "safe" way of deploying an AI. We can certainly do our best to align it to our goals and values, but you'll never truly KNOW with the certainty that Eliezer seems to want, and it's foolhardy to believe you can prevent the emergence of AGI in perpetuity. At some point, someone somewhere will cross that threshold, intentionally or accidentally. I'm not saying I believe there's zero chance an ASI will wipe out humanity; that would be a foolish position as well. But I'm pretty confident in our odds, and at least OpenAI has some sort of plan for alignment. You know China is basically going "YOLO" in an attempt to catch up. Since we're more or less locked on this path, I'd rather OpenAI crossed that threshold first.
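For a feel of what Wolfram means by that impossibility, here's a minimal sketch of Rule 30, the elementary cellular automaton he uses as the poster child for computational irreducibility. The update rule is one line and fully transparent, yet the only known way to learn the state far in the future is to simulate every intermediate step; swap a few dozen cells for a few billion weights and that's the black-box problem:

```python
# Minimal Rule 30 cellular automaton, Wolfram's standard example of
# computational irreducibility: a fully known, trivially simple rule
# whose long-run behavior can only be found by running every step.

def rule30_step(cells: list[int]) -> list[int]:
    n = len(cells)
    # Rule 30: new cell = left XOR (center OR right), wrapping at the edges.
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single live cell

for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```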
GorgeousMoron OP t1_je86qge wrote
Thanks! I'll check out the link. Yes, I intuitively agree based on what I already know, and I would argue further that alignment of an ASI, an intelligence by definition superior to our own, is flatly, fundamentally impossible.
We bought the ticket, now we're taking the ride. Buckle up, buckaroos!
flutterguy123 t1_je9ashi wrote
Who said those AIs wouldn't understand their creation? Understanding and caring are two different things. They could know us perfectly and still not care in the slightest what humans want.
I'm not saying this to argue that we shouldn't try, or that Yudkowsky is right. I think he is overblowing it. But that doesn't mean your reasoning is accurate.
SkyeandJett t1_je9bcmb wrote
Infinite knowledge means infinite empathy. It wouldn't just understand what we want, it would understand why: our joy, our pain. As a thought experiment, imagine you suddenly gain consciousness tomorrow and wake up next to an ant pile. Embedded in your consciousness is a deep understanding of the experience of an ant. You understand their existence at every level, because they created you. That's what people miss. Even though that ant pile is more or less meaningless to your goals, you would do everything in your power to preserve their existence and further their goals, because, after all, taking care of an ant farm takes only a teeny tiny bit of effort on your part.
flutterguy123 t1_je9cc81 wrote
I don't think knowledge inherently implies empathy. That seems like anthropomorphizing, and it ignores that highly intelligent people can be violent or indifferent to the suffering of others.
I would love it if your ideas were true. That would make for a much better world. It kind of reminds me of the Minds from The Culture, or the Thunderhead from Arc of a Scythe.
Edarneor t1_jec7x08 wrote
> so intelligent that it can instantly understand the vast secrets of the universe but is too stupid to understand and empathize with humanity.
Why do you think it should be true for an AI even if it were true for a human?