terath t1_j9sd368 wrote
Reply to comment by perspectiveiskey in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
This is already happening, but the problem is humans, not AI. Even without AI we are descending into an era of misinformation.
gt33m t1_j9ui6id wrote
This is eerily similar to the “guns don’t kill people” argument.
It should be undeniable that AI provides a next-generation tool that lowers the cost of disruption for nefarious actors. That disruption can come in various forms: disinformation, cybercrime, fraud, etc.
terath t1_j9x6v7k wrote
My point is that you don't need AI: you can hire a hundred people to spread propaganda manually. That has been going on for a few years now. AI makes it cheaper, yes, but banning or restricting AI in no way fixes the underlying problem.
People are very enamoured with AI but seem to ignore the many existing technological tools already being used to disrupt things today.
gt33m t1_j9xapzz wrote
Like I said, this is similar to the guns argument. Banning guns does not stop people from killing each other, but easy access to guns amplifies the problem.
AI as a tool of automation is a force multiplier whose output is going to be indistinguishable from human action.
terath t1_j9xdc0i wrote
AI has a great many positive uses; guns, not so much. It's not a good comparison. Nuclear technology might be a better one, and I'm not for banning nuclear either.
gt33m t1_j9xfxid wrote
I'm not certain where banning AI came into the discussion. It's just not going to happen, and I don't see anyone proposing it. However, we shouldn't go to the other extreme either, where everyone is running a nuclear plant in their backyard.
To draw parallels from your example, AI needs a lot of regulation, industry standards, and careful handling. The current technology is still immature, but if the right structures are not put in place now, it will be too late to put the genie back in the bottle later.
perspectiveiskey t1_j9u2auz wrote
I don't want to wax philosophical, but dying is the realm of humans. Death is the ultimate "danger of AI", and it will always require humans.
AI can't be dangerous on Venus.
terath t1_j9u4o7b wrote
If we're getting philosophical: in a weird way, if we ever do manage to build human-like AI, and I personally don't believe we're at all close yet, that AI may well be our legacy. Long after we've all died, that AI could potentially still survive in space or in environments we can't.
Even if we somehow survive for millennia, it will always be near infeasible for us to travel the stars. But it would be pretty easy for an AI that can just put itself in sleep mode for the time it takes to move between systems.
If such a thing happens, I just hope we don't truly build them in our image. The universe doesn't need such an aggressive and illogical species spreading. It deserves something far better.
perspectiveiskey t1_j9u6u27 wrote
Let me flip that on its head for you: what makes you think that the Human-like AI is something you will want to be your representative?
What if it's a perfect match for Jared Kushner? Do you want Jared Kushner representing us on Alpha Centauri?
Generally, the whole "AI is fine/is not fine" debate always comes down to these weird false dichotomies or dilemmas. And IMO, they are always rooted in the false premise that what makes humans noble - what gives them their humanity - is their intelligence.
Two points: a) AI need not be human-like to have devastating lethality, and b) an AGI is almost certainly not going to be "like you", in the same way that most humans aren't like you.
AI's lethality comes from its cheapness and speed of deployment. Whereas a Jared Kushner (or insert your favorite person to dislike) takes 20 years to create from scratch, an AI takes a few hours.