Submitted by GorgeousMoron t3_1266n3c in singularity
SkyeandJett t1_je7xoxq wrote
Reply to comment by GorgeousMoron in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I don't understand the question. WE are creating the AIs. They're literally "given life" through the corpus of human knowledge. Their neural nets aren't composed of random weights that spontaneously gave birth to some random coherent form of intelligence. In many ways, AI is an extension of the human experience itself.
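A toy sketch of the point above, as I read it: a network's weights do start out random, but gradient descent on human-produced data is what shapes them into something coherent. The single-weight model and the y = 2x "corpus" here are illustrative assumptions, not anything from the thread.

```python
import random

random.seed(0)
w = random.uniform(-1, 1)                    # weight begins as pure noise
data = [(x, 2.0 * x) for x in range(1, 6)]   # tiny "corpus" encoding y = 2x

# Plain gradient descent on squared error: the structure in the data,
# not the random initialization, determines where the weight ends up.
for _ in range(200):
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= 0.01 * grad

print(round(w, 3))  # converges to ~2.0, the pattern present in the data
```

Rerun it with a different seed and the starting weight changes, but the endpoint doesn't: the "intelligence" is inherited from the training corpus.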
GorgeousMoron OP t1_je7zoob wrote
Yeah, that's fair as it pertains to AI, more or less. But I don't think we're necessarily building it with any unified "intent" or "rationale"; increasingly, it's more like pure science--let's see what this does. We still have pretty much no way of knowing what's actually happening inside the "black box".
As for the universe itself, what "vast secrets"? You're talking about the unknown unknowns, and possibly a bit of the unknowable. We're limited by our meat-puppet nature. If an AI, equipped with sensors far more sophisticated than our senses, were to understand things about the universe that we simply cannot, would it be able to deduce where all this came from, why, and where it's going? Perhaps.
Would it be able to explain any or all of this to us? Perhaps not.
SkyeandJett t1_je81vu0 wrote
Regarding "we have no way of knowing what's happening in the black box": you're absolutely right, and in fact it's mathematically impossible. I'd suggest reading Wolfram's post on it. There is no calculably "safe" way to deploy an AI. We can certainly do our best to align it to our goals and values, but you'll never truly KNOW with the certainty that Eliezer seems to want, and it's foolhardy to believe you can prevent the emergence of AGI in perpetuity. At some point, someone somewhere will cross that threshold, intentionally or accidentally. I'm not saying I believe there's zero chance an ASI will wipe out humanity; that would be a foolish position as well. But I'm pretty confident in our odds, and at least OpenAI has some sort of plan for alignment. You know China is basically going "YOLO" in an attempt to catch up. Since we're more or less locked onto this path, I'd rather they crossed that threshold first.
GorgeousMoron OP t1_je86qge wrote
Thanks! I'll check out the link. Yes, that matches my intuition based on what I already know, and I would argue further that alignment of an ASI, which is by definition a superior intelligence, by an inferior intelligence, ours, is flatly, fundamentally impossible.
We bought the ticket, now we're taking the ride. Buckle up, buckaroos!