
perspectiveiskey t1_j9u2auz wrote

I don't want to wax philosophical, but dying is the realm of humans. Death is the ultimate "danger of AI", and it will always require humans.

AI can't be dangerous on Venus.


terath t1_j9u4o7b wrote

If we're getting philosophical: in a weird way, if we ever do manage to build human-like AI, and I personally don't believe we're at all close yet, that AI may well be our legacy. Long after we've all died, that AI could potentially still survive in space or in environments we can't.

Even if we somehow survive for millennia, it will always be near infeasible for us to travel the stars. But it would be pretty easy for an AI that can just put itself in sleep mode for the time it takes to move between systems.

If such a thing happens, I just hope we don't truly build them in our image. The universe doesn't need such an aggressive and illogical species spreading. It deserves something far better.


perspectiveiskey t1_j9u6u27 wrote

Let me flip that on its head for you: what makes you think a human-like AI is something you'd want as your representative?

What if it's a perfect match for Jared Kushner? Do you want Jared Kushner representing us on Alpha Centauri?

Generally, the whole "AI is fine / AI is not fine" debate always comes down to these weird false dichotomies or dilemmas. And imo, they are always rooted in the false premise that what makes humans noble - what gives them their humanity - is their intelligence.

Two points: a) AI need not be human-like to have devastating lethality, and b) a GAI is almost certainly not going to be "like you", in the same way that most humans aren't like you.

AI's lethality comes from its cheapness and speed of deployment. Whereas a Jared Kushner (or insert your favorite person to dislike) takes 20 years to create from scratch, an AI takes a few hours.
