perspectiveiskey t1_j9u6u27 wrote

Let me flip that on its head for you: what makes you think that a human-like AI is something you would want as your representative?

What if it's a perfect match for Jared Kushner? Do you want Jared Kushner representing us on Alpha Centauri?

Generally, the whole AI is fine/is not fine debate always comes down to these weird false dichotomies or dilemmas. And imo, they are always rooted in the false premise that what makes humans noble - what gives them their humanity - is their intelligence.

Two points: a) AI need not be human-like to have devastating lethality, and b) a GAI is almost certainly not going to be "like you", just as most humans aren't like you.

AI's lethality comes from its cheapness and speed of deployment. Whereas a Jared Kushner (or insert your favorite person to dislike) takes 20 years to create from scratch, AI takes a few hours.

2

perspectiveiskey t1_j9u1r9n wrote

AI reduces the "proof of work" cost of an Andrew Wakefield paper. This is significant.

There's a reason people don't dedicate long hours to writing completely bogus scientific papers which will result in literally no personal gain: it's because they want to live their lives and do things like have a BBQ on a nice summer day.

The work involved in sounding credible and legitimate is one of the few barriers keeping the edifice of what we call Science standing. The other barrier is peer review...

Both of these barriers are under serious threat from the ease of generation. AI is our infinite monkeys on infinite typewriters moment.

This is to say nothing of much more insidious and clever intrusions into our thought institutions.

2

perspectiveiskey t1_j9s8578 wrote

> It's amazing to me how easily the scale of the threat is dismissed by you after you acknowledge the concerns.

I second this.

Also, the effects of misaligned AI can be entirely mediated by so-called meat-space: an AI can sow astonishing havoc simply by damaging our ability to know what is true.

In fact, I find this to be the biggest danger of all. We already have a scientific publishing "problem" in that we have arrived at an era of diminishing returns and extreme specialization; I simply cannot imagine the real-world damage that would be inflicted when (not if) someone starts pumping out "very legitimate sounding but factually false papers on vaccine side-effects".

I just watched this today where he talks about using automated code generation for code verification and tests. The man is brilliant and the field is brilliant, but one thing is certain: the complexity far exceeds individual humans' ability to fully comprehend.

Now combine that with this and you have a true recipe for disaster.

8