Submitted by DonOfTheDarkNight t3_118emg7 in singularity
sticky_symbols t1_j9m8yn3 wrote
Reply to comment by FirstOrderCat in What are your thoughts on Eliezer Yudkowsky? by DonOfTheDarkNight
Asimov's rules don't work, and many of the stories were actually about that. But they also don't include civilization-ending mistakes. The movie I, Robot actually did a great job updating that premise, I think.
One counterintuitive thing is that people in the field of AI are way harder to convince than civilians. They have a vested interest in research moving ahead full speed.
As for your bs detector, I don't know what to say. And I'm not linking this account to my real identity. You can believe me or not.
If you're skeptical that such a field exists, you can look at the Alignment Forum as the principal place where we publish.
FirstOrderCat t1_j9ma8lr wrote
> Asimov's rules don't work
You jumped to another topic. The initial discussion was that Asimov's rules brought much more awareness, and you can't point to similar material results from Yudkowsky.
sticky_symbols t1_j9mbzia wrote
Sorry; my implication was that Asimov introduced the topic but wasn't particularly compelling. Yudkowsky created the first institute and garnered the first funding. But of course credit should be broadly shared.