
ErinBLAMovich t1_j9snb17 wrote

Maybe when an actual expert tells you you're overreacting, you should listen.

Are you seriously arguing that the modern world is somehow corrupted by some magical, unified "postmodern philosophy"? We live in the most peaceful time in recorded history. Read "Factfulness" for exact figures. And while you're at it, actually read "Black Swan" instead of throwing that term around, because you clearly need a lesson in measuring probability.

If you think AI will be destructive, outline some plausible and SPECIFIC scenarios for how this could happen, instead of vague allusions to philosophy with no proof of causality. We could then debate the likelihood of each scenario.

16

perspectiveiskey t1_j9s8578 wrote

> It's amazing to me how easily the scale of the threat is dismissed by you after you acknowledge the concerns.

I second this.

Also, the effects of misaligned AI can be mediated entirely by so-called meat-space: an AI can sow astonishing havoc simply by damaging our ability to know what is true.

In fact, I find this to be the biggest danger of all. We already have a scientific publishing "problem" in that we have arrived at an era of diminishing returns and extreme specialization; I simply cannot imagine the real-world damage that would be inflicted when (not if) someone starts pumping out "very legitimate-sounding but factually false papers on vaccine side-effects".

I just watched this today, where he talks about using automated code generation for code verification and tests. The man is brilliant and the field is brilliant, but one thing is certain: the complexity involved far exceeds individual humans' ability to fully comprehend it.

Now combine that with this and you have a true recipe for disaster.

8

VioletCrow t1_j9smth5 wrote

> I simply cannot imagine the real-world damage that would be inflicted when (not if) someone starts pumping out "very legitimate-sounding but factually false papers on vaccine side-effects".

I mean, just look at the current anti-vaccine movement. You just described the original Andrew Wakefield paper about vaccines causing autism. We don't need AI for this to happen, just a very credulous and gullible press.

8

governingsalmon t1_j9svhv8 wrote

I agree that we don't necessarily need AI for nefarious actors to spread scientific misinformation, but I do think AI introduces another tool or weapon that could be used by the Andrew Wakefields of the future in a way that might pose unique dangers to public health and public trust in scientific institutions.

I'm not sure whether malevolence or incompetence has contributed more to vaccine misinformation, but if someone intentionally sought to produce fake but convincing scientific-seeming work, wouldn't something like a generative language model allow them to do so at a massively higher scale with little knowledge of a specific field?

I’ve been wondering what would happen if someone flooded a set of journals with hundreds of AI-written manuscripts without any real underlying data. One could even have all the results support a given narrative. Journals might develop intelligent ways of counteracting this but it might pose a unique problem in the future.

3

perspectiveiskey t1_j9u1r9n wrote

AI reduces the "proof of work" cost of an Andrew Wakefield paper. This is significant.

There's a reason people don't dedicate long hours to writing completely bogus scientific papers which will result in literally no personal gain: it's because they want to live their lives and do things like have a BBQ on a nice summer day.

The work involved in sounding credible and legitimate is one of the few barriers holding up the edifice of what we call Science. The other barrier is peer review...

Both of these barriers are under serious threat from the ease of generation. AI is our infinite-monkeys-on-infinite-typewriters moment.

This is to say nothing of much more insidious and clever intrusions into our thought institutions.

2

terath t1_j9sd368 wrote

This is already happening, but the problem is humans, not AI. Even without AI we are descending into an era of misinformation.

4

gt33m t1_j9ui6id wrote

This is eerily similar to the “guns don’t kill people” argument.

It should be undeniable that AI provides a next-generation tool that lowers the cost of disruption for nefarious actors. That disruption can come in various forms: disinformation, cybercrime, fraud, etc.

3

terath t1_j9x6v7k wrote

My point is that you don't need AI to hire a hundred people to manually spread propaganda. That's been going on for a few years now. AI makes it cheaper, yes, but banning or restricting AI in no way fixes the problem.

People are very enamoured with AI but seem to ignore the many existing technological tools already being used to disrupt things today.

0

gt33m t1_j9xapzz wrote

Like I said, this is similar to the guns argument. Banning guns does not stop people from killing each other, but easy access to guns amplifies the problem.

AI as a tool of automation is a force multiplier that is going to be indistinguishable from human action.

3

terath t1_j9xdc0i wrote

AI has a great many positive uses. Guns, not so much. It's not a good comparison. Nuclear technology might be better, and I'm not for banning nuclear either.

0

gt33m t1_j9xfxid wrote

Not certain where banning AI came into the discussion. It's just not going to happen, and I don't see anyone proposing it. However, it shouldn't be the other extreme either, where everyone is running a nuclear plant in their backyard.

To draw a parallel from your example, AI needs a lot of regulation, industry standards, and careful handling. The current technology is still immature, but if the right structures are not put in place now, it will be too late to put the genie back in the bottle later.

3

perspectiveiskey t1_j9u2auz wrote

I don't want to wax philosophical, but dying is the realm of humans. Death is the ultimate "danger of AI", and it will always require humans.

AI can't be dangerous on Venus.

2

terath t1_j9u4o7b wrote

If we're getting philosophical: in a weird way, if we ever do manage to build human-like AI, and I personally don't believe we're at all close yet, that AI may well be our legacy. Long after we've all died, that AI could potentially still survive in space or in environments we can't.

Even if we somehow survive for millennia, it will always be near-infeasible for us to travel the stars. But it would be pretty easy for an AI that can just put itself in sleep mode for the time it takes to move between systems.

If such a thing happens, I just hope we don't truly build them in our image. The universe doesn't need such an aggressive and illogical species spreading. It deserves something far better.

1

perspectiveiskey t1_j9u6u27 wrote

Let me flip that on its head for you: what makes you think a human-like AI is something you would want as your representative?

What if it's a perfect match for Jared Kushner? Do you want Jared Kushner representing us on Alpha Centauri?

Generally, the whole "AI is fine / is not fine" debate always comes down to these weird false dichotomies or dilemmas. And imo, they are always rooted in the false premise that what makes humans noble, what gives them their humanity, is their intelligence.

Two points: a) AI need not be human-like to have devastating lethality, and b) a GAI is almost certainly not going to be "like you", in the same way that most humans aren't like you.

AI's lethality comes from its cheapness and speed of deployment. Whereas a Jared Kushner (or insert your favorite person to dislike) takes 20 years to create from scratch, an AI takes a few hours.

2

WarAndGeese t1_j9sj481 wrote

I agree about the callousness, and that's without artificial intelligence too. Global power balances have shifted in times of rapid technological development, and that development created control vacuums and conflicts that were resolved by war. If we learn from history we can plan for it and prevent it, but the same types of fundamental underlying shifts are being made now. We can say that global financial incentives act to prevent worldwide conflict, but that only goes so far. All of the things I'm saying are on the trajectory without neural networks as well; they are just one of the many rapid shifts in political economy and productive efficiency.

In the same way that people were geared up at the start of the Russian invasion of Ukraine to try to prevent nuclear war, we should all be vigilant and try to globally demilitarise and democratise to prevent any war. The global nuclear threat isn't even over, and it's regressing.

1

HINDBRAIN t1_j9sthbq wrote

"Your discarded toenail could turn into Keratinator, Devourer of Worlds, and end all life in the galaxy. We need agencies and funding to regulate toenails."

"That's stupid, and very unlikely."

"You are dismissing the scale of the threat!"

−5

soricellia t1_j9tn2xi wrote

I don't even think this is a strawman, mate; you've mischaracterized me so badly it's basically ad hominem.

5

HINDBRAIN t1_j9tnkfa wrote

You're basically a doomsday cultist, just hiding it behind sci-fi language. "The scale of the threat" is irrelevant if the probability of it happening is infinitesimal.

−4

soricellia t1_j9tomaw wrote

Well, I think that entirely depends on what the threat is, mate. The probability of AGI rising up Terminator-style, I agree, seems pretty small. The probability of disaster from humans' inability to distinguish true from false and fact from fiction being exacerbated by AI? That seems much higher. Also, I don't think either of us has a formula for this risk, so I think saying the probability of an event happening is infinitesimal is intellectual fraud.

6