perspectiveiskey t1_j9u2auz wrote
Reply to comment by terath in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I don't want to wax philosophical, but dying is the realm of humans. Death is the ultimate "danger of AI", and it will always require humans.
AI can't be dangerous on Venus.
perspectiveiskey t1_j9u1r9n wrote
Reply to comment by VioletCrow in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
AI reduces the "proof of work" cost of an Andrew Wakefield paper. This is significant.
There's a reason people don't dedicate long hours to writing completely bogus scientific papers that bring them literally no personal gain: they want to live their lives and do things like have a BBQ on a nice summer day.
The work involved in sounding credible and legitimate is one of the few barriers keeping the edifice of what we call Science standing. The other barrier is peer review...
Both of these barriers are under serious threat from the ease of generation. AI is our infinite monkeys on infinite typewriters moment.
This is to say nothing of much more insidious and clever intrusions into our thought institutions.
perspectiveiskey t1_j9s8578 wrote
Reply to comment by [deleted] in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
> It's amazing to me how easily the scale of the threat is dismissed by you after you acknowledge the concerns.
I second this.
Also, the effects of misaligned AI can be mediated entirely by so-called meat-space: an AI can sow astonishing havoc simply by damaging our ability to know what is true.
In fact, I find this to be the biggest danger of all. We already have a scientific publishing "problem" in that we have arrived at an era of diminishing returns and extreme specialization. I simply cannot imagine the real-world damage that would be inflicted when (not if) someone starts pumping out "very legitimate-sounding but factually false papers on vaccine side effects".
I just watched this today, where he talks about using automated code generation for code verification and tests. The man is brilliant and the field is brilliant, but one thing is certain: the complexity involved far exceeds any individual human's ability to fully comprehend it.
Now combine that with this and you have a true recipe for disaster.
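To make the worry concrete, here's a minimal sketch (my own illustration, not anything from the talk) of that workflow: a model generates the tests that are supposed to verify the code, and no step in the loop requires a human to understand either one. `generate` is a hypothetical stand-in for whatever code-generation model you'd plug in.

```python
import subprocess
import tempfile

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a code-generation model."""
    raise NotImplementedError("plug in your model of choice")

def generated_tests_for(source: str) -> str:
    # Ask the model to write pytest-style tests for code it is handed.
    return generate(f"Write pytest unit tests for this code:\n{source}")

def verify(source: str) -> bool:
    # The machine both writes and checks the code; no human in the loop
    # ever has to comprehend what is being verified, or how.
    tests = generated_tests_for(source)
    with tempfile.NamedTemporaryFile(
        "w", suffix="_test.py", delete=False
    ) as f:
        f.write(source + "\n\n" + tests)
        path = f.name
    result = subprocess.run(["pytest", path], capture_output=True)
    return result.returncode == 0
```

Every box in that loop scales with compute rather than human attention, which is exactly the comprehension problem above.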
perspectiveiskey t1_j7kd6ee wrote
Reply to [Project] I used a new ML algo called "AnimeSR" to restore the Cowboy Bebop movie and up rez it to full 4K. Here's a link to the end result - honestly think it looks amazing! (Video and Model link in post) by VR_Angel
This is why AI was created. I think we can call it now.
Jokes aside, thank you for doing this. It looks fantastic.
perspectiveiskey t1_j5hxld7 wrote
Reply to comment by HatsusenoRin in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Or in human speech. Very faint, but there, so that no matter how anonymously you post, once you've written enough words your identity leaks.
Kinda one of the premises of Neal Stephenson's book Fall; or, Dodge in Hell.
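For what it's worth, here's a toy sketch of how that leak works (my own crude illustration, nowhere near real stylometry tooling): profile a text by its function-word frequencies, then compare profiles with cosine similarity. Given enough words, an anonymous post's profile converges on its author's habitual one.

```python
from collections import Counter
import math

# A handful of common function words; real stylometry uses hundreds of
# features (word lengths, punctuation, character n-grams, ...).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "on", "with", "as", "but"]

def profile(text: str) -> list[float]:
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    # Relative frequency of each function word in the text.
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: str, b: str) -> float:
    # Cosine similarity between two style profiles.
    pa, pb = profile(a), profile(b)
    dot = sum(x * y for x, y in zip(pa, pb))
    na = math.sqrt(sum(x * x for x in pa))
    nb = math.sqrt(sum(x * x for x in pb))
    return dot / (na * nb) if na and nb else 0.0

# Usage: compare an anonymous post against a known writing sample.
# print(similarity(anonymous_post, known_sample))
```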
perspectiveiskey t1_j9u6u27 wrote
Reply to comment by terath in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
Let me flip that on its head for you: what makes you think that the Human-like AI is something you will want to be your representative?
What if it's a perfect match for Jared Kushner? Do you want Jared Kushner representing us on Alpha Centauri?
Generally, the whole "AI is fine / AI is not fine" debate always comes down to these weird false dichotomies or dilemmas. And imo, they are always rooted in the false premise that what makes humans noble - what gives them their humanity - is their intelligence.
Two points: a) AI need not be human-like to have devastating lethality, and b) a GAI is almost certainly not going to be "like you", in the way that most humans aren't like you.
AI's lethality comes from its cheapness and speed of deployment. Whereas a Jared Kushner (or insert your favorite person to dislike) takes 20 years to create from scratch, an AI takes a few hours.