Submitted by DonOfTheDarkNight t3_118emg7 in singularity
FirstOrderCat t1_j9hxjzo wrote
Reply to comment by sticky_symbols in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
I argued with him on Hacker News, and he gets very reactive when he reads something he doesn't like.
sticky_symbols t1_j9i0yw8 wrote
Well, he's the father of a whole field that might determine the future of humanity. It would be tough to keep your cool the 1009th time you've seen the same poorly thought-out dismissal of the whole thing. If I were in his shoes I might be even crankier.
FirstOrderCat t1_j9ifrw4 wrote
I don't know much about his practical achievements in this area.
sticky_symbols t1_j9itrli wrote
Founding a field is a bit of a rare thing.
FirstOrderCat t1_j9j9qlg wrote
Which field? AI danger awareness? It was in the Terminator movie.
sticky_symbols t1_j9m3uus wrote
Good point, but those didn't convince anyone to take it seriously because they didn't have compelling arguments. Yudkowsky did.
FirstOrderCat t1_j9m6fj2 wrote
> but those didn't convince anyone to take it seriously
Lol, I totally got the idea that a rogue robot could start killing humans long before I learned of Yudkowsky's existence.
> Yudkowsky did.
Could you support your hand-waving with any verifiable evidence?
sticky_symbols t1_j9m6t5d wrote
Well, I'm now a professional in the field of AGI safety. Not sure how you can document influence; I'd say most of my colleagues would agree with that. Not that it wouldn't have happened without him, but it might've taken many more years to ramp up to the same extent.
FirstOrderCat t1_j9m8bhd wrote
> Not that it wouldn't have happened without him, but it might've taken many more years to ramp up to the same extent.
Happened what, exactly? What are the material results of his research?
I think Asimov, with his rules, produced an earlier and much stronger impact.
> I'm now a professional in the field of AGI safety
Lol, you adding "AGI" makes my bs detector beep extremely loudly.
Which AGI exactly are you testing for safety?
sticky_symbols t1_j9m8yn3 wrote
Asimov's rules don't work, and many of the stories were actually about that. But they also don't include civilization-ending mistakes. The movie I, Robot actually did a great job updating that premise, I think.
One counterintuitive thing is that people in the field of AI are way harder to convince than civilians. They have a vested interest in research moving ahead full speed.
As for your bs detector, I don't know what to say. And I'm not linking this account to my real identity. You can believe me or not.
If you're skeptical that such a field exists, you can look at the Alignment Forum as the principal place where we publish.
FirstOrderCat t1_j9ma8lr wrote
> Asimov's rules don't work
You're jumping to another topic. The initial discussion was that Asimov's rules brought much more awareness, and you can't point to similar material results from Yudkowsky.
sticky_symbols t1_j9mbzia wrote
Sorry; my implication was that Asimov introduced the topic but wasn't particularly compelling. Yudkowsky created the first institute and garnered the first funding. But of course credit should be broadly shared.
burnt_umber_ciera t1_j9inifr wrote
"My accolades being the best pessimist." But, he might be right.